Tamagotchis 3.0 - Evolutionary AI For Personal Avatars
Jinni Game Update #8 - Building an AI pipeline to gamify self-actualization that's completely personalized for you, your intentions, and your lifestyle.
Jinni is impossible without genAI
As I mentioned in the intro to the Jinni dev blog, I started building the app 6 years ago. I didn't continue it at the time for 2 reasons. The first is that it was one of the first things I ever coded when I was 22, and I knew I needed to be a much better programmer and entrepreneur if I wanted to unlock the idea’s full potential. The second is that, at the time, it was infeasible to execute on the game’s core idea: programmatic image editing based on dynamic data.
To have your tamagotchi dynamically evolve every day would require an army of data scientists and artists working in close collaboration, analyzing and redrawing each player's avatar by hand daily. Obviously time-consuming, expensive, inconsistent, full of privacy issues, etc. I needed a way to programmatically evaluate data based on players' different goals and intentions in life, and then use those personalized analyses to programmatically create daily, dynamic graphical evolutions of your tamagotchi.
The solution? I was interested in genetic programming at the time, a form of generative AI, and I knew there must eventually be some way to programmatically generate tamagotchi evolution images, so I decided to wait to build the app. Lo and behold, Stable Diffusion, DALL-E, Midjourney, and others came into play with exactly the thing I needed to make this game a reality.
I'm really proud of what I've built over the past month (only a week or two of actual work though) and I'm excited to be sharing it with you all. While it doesn't exceed my exceptionally high expectations, it's a phenomenal start on a very complex problem that I didn't even know how to solve when I started building the game again a few months ago.
High-level concepts I won't go into in this blog, but which you can ask an LLM about if you need to:
LLMs vs image generation
Prompt generation pipeline
Agentic systems
Vector embeddings
Prompt Engineering
Problems to solve building generative AI for a tamagotchi-based self-actualization game
Before diving into what I actually did, it's important to know all the problems and constraints that went into it. I mentioned some at a high level in the intro, but there are many different considerations that led to the design solution I came up with.
Need to accommodate the vast diversity in people's goals, lifestyles, apps/services, and preferred tamagotchi aesthetics.
Do not want to be prescriptive about “good” and “bad” actions. No global algorithm for experience / points / social score / etc.
Re #1 + #2: Must interpret players' actions based on how they perceive them, not how I dictate them to be.
Re #3: I've never heard of anyone programmatically generating personalized algorithms for people to run custom data analysis with subjective viewpoints.
Accommodate arbitrary data in personal analysis algorithms. We already standardize data from all data sources so they are interoperable, to facilitate this and other data analysis. However, we must also consider the fact that people with the same intentions might have different habits, and thus different data, to influence their jinni.
Ensure that all this programmatically generated code and these artifacts are logically consistent, or at the very least internally consistent, to maintain suspension of disbelief in gameplay (e.g. running more could give your jinni more legs, which is illogical compared to making its existing legs stronger, but if we create more legs for everyone's jinni every time they run, that's internally consistent, allowing players to update their mental model of how the game works and reliably adjust their strategies).
Final output image quality: Does all of this even generate a jinn that is recognizable as a “living thing”? Is it identifiable as the last jinn we interacted with? Does this jinn evolve in the way we expected, reinforcing the core game mechanic?
How do you deal with a lack of data? Do we treat a lack of data as a negative input to incentivize players to collect that data, or ignore intentions that have no data associated with them?
Limiting personal customizations in the early alpha phase to better understand and test how the AI interprets intentions and how tamagotchis visually evolve, while still letting players choose their own adventure.
There are many more that I haven't come across yet, can't remember at this time, or can't quite grok or articulate myself.
Which AI models perform best for each of the different needs - generating LLM prompts, analyzing data, and image editing (not generation).
I considered using LangChain for all my AI stuff, but a simple 5-word comment on a Reddit post dissuaded me: "Just write your own code." It resonated with me because I've intentionally been taking shortcuts to get the app pushed out, and I don't feel great about the dependency creep on my backend. I also know that this AI component is Jinni's secret sauce / core competency (I say as I spill all the beans here) and I'd like to have more control and insight into how it works. So, wanting the most control and flexibility over it, I decided to do everything in Clojure on my existing server - no new microservice to deal with devops for, no additional language to program in, etc. - and write it from scratch. I did still use a blog post by LangChain to guide the design of my LLM system, mainly their insight that preprogrammed pipelines perform better than generalized agent frameworks. And TBH I'm so happy I did: excluding comments and dev logs, one of the most critical components of the entire Jinni game takes only ~150 lines of code.
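To give a concrete picture of what "just write your own code" looks like, here is a minimal sketch of a preprogrammed pipeline in Clojure. This is purely illustrative, not my actual code: the function names and prompt strings are placeholders, and `complete` stands in for whatever thin LLM client wrapper you use.

;; Minimal sketch of a preprogrammed LLM pipeline (illustrative, not Jinni's
;; actual code). Each stage is a plain function, and one evolution run is just
;; threading data through them in a fixed order - no agent framework needed.
(ns jinni.sketch.pipeline)

(defn evolve-jinni
  "Run one daily evolution: intentions -> analysis -> new image prompt.
  `complete` is a fn of prompt-string -> completion-string, e.g. a thin
  wrapper around an LLM API."
  [complete {:keys [intentions day-data analysis-prompt]}]
  (let [;; reuse the cached Analysis Prompt if intentions haven't changed
        analysis-prompt (or analysis-prompt
                            (complete (str "Generate an analysis prompt for a player "
                                           "with these intentions: " intentions)))
        ;; evaluate today's data against the player's intentions
        analysis-output (complete (str analysis-prompt "\n\nToday's data:\n" day-data))
        ;; turn the structured analysis into a DALL-E image-edit prompt
        augmentation    (complete (str "Describe how the avatar should change, "
                                       "given this analysis: " analysis-output))]
    {:analysis-prompt     analysis-prompt
     :analysis-output     analysis-output
     :augmentation-output augmentation}))

The point of the fixed shape is that every stage is testable on its own and the whole run is deterministic in structure, even though the LLM outputs vary.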
Day In The Life Of A Jinn
Now that you understand all the problems I was trying to solve and how they affect how the game can be played, I'll walk you through the thought process and technical implementation behind how your jinni evolves over time. This should give you a better sense of how to actually play the game and how your actions in the real world affect your virtual avatar. It will be mostly technical with some graphics, but you should find it interesting nonetheless.
An up to date schematic of my AI evolution process can always be found in the docs on my github repo.
Players set their intentions for self-actualization. You can configure how your jinni looks just like every other widget in the game (TK inventory blog). This allows you to customize your jinni with different base models, aesthetic styles, intentions to guide it, the mood or personality it will have, and the analytical and artistic algorithms that evolve it over time. Because we're in an early testing phase and I need to monitor the performance of this whole pipeline, all players will pick from a few predefined options during onboarding and use the same base tamagotchi to evolve from.
Players' intentions are sent to an LLM to generate personal "Analysis Prompts". An Analysis Prompt is another LLM prompt, generated by the LLM itself, that uses a player's intentions to figure out what types of behavior the player should exhibit to actualize their intentions. An example Analysis Prompt is:
Example Analysis Prompt

"Based on my intentions of cultivating a sexy physique, expressing creativity daily, and self-actualizing, look for actions that align with my ideal attributes of Intelligence and Community. Reflect on my daily activities: Am I engaging in intelligent pursuits, such as learning new things or problem-solving? Am I consistently making healthy choices to support my physical well-being and sexy physique? Am I practicing mindfulness and self-reflection to better understand my thoughts and emotions, and make intentional choices that align with my desired growth? If my actions consistently reflect these intentions and attributes, I am on the right path to self-actualization. If not, I must reassess and make adjustments to bring my actions in alignment with my goals."

Generate text embeddings from the player's Analysis Prompt. These can be used for talking to your jinn about your intentions, discussing how your intentions have changed over time, finding other players with similar intentions to match for summoning circles (see the sketch after the next step), and many more AI-powered features. This also helps optimize the system: we don't have to generate new prompts every day, since we can reuse your old Analysis Prompt if you haven't changed your intentions.

Players equip items that feed energy (data) to their jinn. Your jinn can only evolve based on what you do in the real world, so we need to know what you are doing. Obv! Once you equip items you commonly use, e.g. SoundCloud or GitHub, they can affect your jinn's evolutions.
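As a taste of what those embeddings from step #3 enable, here's a sketch of matching players for summoning circles by intention similarity. It assumes each player's Analysis Prompt has already been embedded into a vector of floats (e.g. via an embeddings API); all names here are hypothetical, not Jinni's real code.

;; Sketch of matching players by intention-embedding similarity
;; (hypothetical names; assumes embeddings are plain vectors of doubles).
(defn dot [a b] (reduce + (map * a b)))

(defn cosine-similarity [a b]
  (/ (dot a b)
     (* (Math/sqrt (dot a a)) (Math/sqrt (dot b b)))))

(defn closest-intentions
  "Rank `others` by how similar their intention embeddings are to `me`.
  Each entry is a map like {:player \"id\" :embedding [0.1 0.2 ...]}."
  [me others]
  (->> others
       (map #(assoc % :similarity (cosine-similarity (:embedding me) (:embedding %))))
       (sort-by :similarity >)))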
A player's jinn "conjures" data into the game database. Based on the items you equip, we pull data from each integration to feed your jinn. Apple's health data will make your jinn skinnier/fatter, SoundCloud data will make your jinn happier/sadder. Data is normalized across all data providers, so whether you track running in Strava, iHealth, Whoop, etc., it will always be read the same by your jinn (they are digital wizards, remember 😉). A sketch of that normalization follows below.
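To make the "always read the same" claim concrete, here's a sketch of how provider-specific payloads can be mapped into one standard shape before a jinn sees them. The provider keys and field names are hypothetical; the real integrations standardize much more than this.

;; Sketch of normalizing provider-specific activity payloads into one
;; standard shape (hypothetical field names, not the real schema).
(defmulti normalize-action :provider)

(defmethod normalize-action :strava
  [{:keys [distance_m moving_time]}]
  {:action :run :distance-km (/ distance_m 1000.0) :duration-s moving_time})

(defmethod normalize-action :whoop
  [{:keys [km seconds]}]
  {:action :run :distance-km km :duration-s seconds})

;; Either provider's payload now reads the same to your jinn:
(normalize-action {:provider :strava :distance_m 5000 :moving_time 1800})
;; => {:action :run, :distance-km 5.0, :duration-s 1800}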
Synchronized jinn evolutions are initiated for all players simultaneously. Somehow jinn have some magic thing where they just vibe and do shit at the exact same time. Their daily evolutions are one of those things. They all process the data they have been fed all day and go through a spontaneous metamorphosis into their new forms. Idk how it works, I'm not a jinn, I just channel their energy through my code.
The Analysis Prompt evaluates player actions and generates an Analysis Output. Every player's personal prompt from step #2 is imbued with the data they generated in steps #4 + #5 to see how their actions correlated with their intentions. This creates a structured output describing how (the AI thinks) you've changed since the last time your jinn evolved. This extra step of outputting structured analysis also allows us to rerun image evolutions in the future without having to rerun data analysis, letting players play around with different image styles and base avatar models without compromising their history. I have an assumption, somewhat validated in early testing, that structured quantifiable outputs help the AI understand what it's trying to analyze and how it's supposed to think, for more reliable results.

Example Analysis Output:
{:analysis {
  :brain 0.1 :reason "Assuming the pet has remained intellectually stimulated, leading to slight improvement in brain development.",
  :eyes -0.2 :reason "Assuming the pet's eyes have aged slightly, becoming more expressive and nuanced.",
  :heart 0.15 :reason "Assuming the pet's emotional well-being has remained stable or improved, leading to a slight increase in heart size.",
  :posture 0.05 :reason "Assuming the pet's posture has remained relatively unchanged, with only slight modifications based on natural aging."
}}
There are still some prompt engineering kinks I'm working out for formatting the structured output, which you can see above (note the repeated :reason keys). This isn't being used by actual code yet, so it's fine but not perfect.
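One direction this could go (purely a sketch of a target shape, not what the model returns today) is nesting the reason under each attribute so keys can't collide and the output parses cleanly as EDN:

;; Hypothetical target shape: one map per attribute, so :reason can't collide.
{:analysis
 {:brain   {:delta 0.1  :reason "Intellectually stimulated; slight brain development."}
  :eyes    {:delta -0.2 :reason "Eyes aged slightly; more expressive and nuanced."}
  :heart   {:delta 0.15 :reason "Emotional well-being stable or improved; slightly larger heart."}
  :posture {:delta 0.05 :reason "Relatively unchanged; natural aging only."}}}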
The Augmentation Prompt uses the Analysis Output to generate an "Augmentation Output", a DALL-E prompt that will modify your current jinn into their new form. An Augmentation Prompt is a completely templatized LLM prompt that only takes in an LLM-generated Analysis Output. It generates a new DALL-E prompt daily based on what has changed since the last time you evolved.

Example Augmentation Output:
{:prompt "Based on the analysis data provided, your avatar will have a brighter and more vibrant color palette to reflect your increased creativity and improved nutrition. The eyes will sparkle with a slightly larger pupil to represent your enhanced focus and energy from increased exercise. The heart shape will be fuller and more defined, symbolizing the growth and nurturing of your relationships. Your posture will appear more upright and confident, reflecting your newfound self-assurance and progress towards self-actualization."}
DALL-E edits your current jinn image using the Augmentation Output.

Your jinn's new image is saved as the base model for the next round of evolutions and appears on your app home screen as your new avatar image.
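For reference, here's a sketch of what that final image-edit call can look like from Clojure using clj-http. The endpoint and multipart fields follow OpenAI's images/edits API; everything else (function names, key handling, the lack of masking and error handling) is illustrative. Note the endpoint expects a square PNG under 4MB per OpenAI's docs - exactly the kind of formatting detail I'm currently fighting (see Game Updates below).

;; Sketch of the DALL-E image-edit call using clj-http (illustrative names;
;; no retries, masking, or error handling). Per OpenAI's docs, the image
;; must be a square PNG under 4MB.
(require '[clj-http.client :as http]
         '[clojure.java.io :as io])

(defn edit-jinn-image
  "POST the current jinn PNG plus the Augmentation Output prompt to OpenAI's
  image-edit endpoint. Returns the parsed JSON response with the new image URL."
  [api-key png-path augmentation-prompt]
  (http/post "https://api.openai.com/v1/images/edits"
             {:headers   {"Authorization" (str "Bearer " api-key)}
              :multipart [{:name "image"  :content (io/file png-path)}
                          {:name "prompt" :content augmentation-prompt}
                          {:name "n"      :content "1"}
                          {:name "size"   :content "1024x1024"}]
              :as :json}))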
Game Updates
Initial AI pipeline development almost complete! Still ironing out the last details on the recurring image modification. Having issues uploading the PNG to DALL-E, which is literally the final step, and then it's done. Still need a lot of work on prompt engineering, but it's unclear what direction that will need to take until the DALL-E augmentation step is complete.
Starting to go harder on community engagement and direct/structured user questioning. The first initiative is signaling which image people want as the first base tamagotchi, an obvious thing to crowdsource now that the core idea, tech, and UI are validated. Maybe I'll even do a community call before I fly to Japan next month. I'll post in the early adopters Telegram chat if I do, so join if you want to participate. Next will probably be more feedback on abilities, widgets, and integrations people want.
Bank account created!!!! Now we can make real $, not just crypto 😛. I'll be exploring grants and MOU contracts harder.
Got a $2.5k grant in credits from OpenAI for image editing!
5 users onboarded to the Android app. Not all of them have accounts in the database due to early config issues. 2 players have a complete profile + customized homepage widgets set up.
Almost 50 waitlist signups! Mostly for iPhone :/ Great data signal to get that up ASAP. I want to get the core game more streamlined and tested before rolling out to a larger audience, but it's great to know I can 10x my players instantly once I want to.
Product analytics dashboards are developing. The top chart shows daily total players, average activity per player, and total activity across all players. The bottom chart lets me see who has created accounts, who has added integrations, and who has started customizing their homepage - steps in the onboarding flow and engagement with game mechanics.
Most of this is me of course.
get-playlist must be 0 because I haven't built homepage playlists yet. Everyone (me included) seems excited for this feature, which is why it's my target metric for onboarding.

Updated widget config to allow customization per jinni instead of per player. This was mostly to allow for communal jinn, which don't belong to a single player and would need different config than their owner's. It also unlocks the ability for every player to have multiple jinn to customize with different gameplay styles, personalities, and homepage layouts.
Added an API endpoint to automatically conjure all data from a player across all the integrations they've added. Now I can very easily update every player's jinn with no scripting, in-app actions, etc., so tamagotchis constantly evolve without players needing to check in every day.
Created redirects (portals) with analytics tracking. I've been making arbitrary links to Jinni-owned pages for campaigns, or to partner sites for referrals. Better analytics for marketing campaigns, plus cleaner, branded, shareable URLs.
Last month the Jinni dev blog on substack had a 7% increase in subscribers, 14% increase in views, and 10% bump in open rates making it almost 50%!
Made my first breaking change to the prod server! Had to happen eventually; it was just a minor syntax mistake when adding some Cypher to an existing database query. With all the changes I've made to :Avatar and :Widget, which are pretty core to the API and backend, I think it's great this is the first thing that's broken. (It was on a :Widget query but had nothing to do with logic, just syntax.)
Using my hand as an onboarding tool. I got an NFC chip implanted in my hand at Vitalia and have programmed it to use my new portals to bring people to the new install page on the website.
Results for the Zuzalu quadratic funding round are almost finalized. It looks like I'll receive somewhere between 4-8 ETH in addition to the 1.3 ETH (now $5k since I hodl'd) in direct donations I received. This makes me one of the largest recipients in the round! Thank you, thank you, thank you to everyone that donated, and make sure you DM me your donation tx if you haven't already!
Next steps
Figure out DALL-E image formatting to finish the AI divination pipeline. It's a super annoying and basic thing that is blocking literally the most important thing in the game. Classic software development.
Finish the homepage playlists feature for Spotify. Mostly UI/UX work; the backend logic and API should be done already.
Integrate SoundCloud. Most players I've talked to use it over Spotify, which is my preferred platform and the one that's currently integrated. Mostly just a copypasta process from the Spotify code with new data transmutation spells.
Do more user research with Zu event organizers about what a communal jinn would look like, what widgets and actions they want players to engage with on their profile, what insights they want into community health, and how they want to improve it.
Build summoning circle features on the backend + frontend for communal jinn. I already have most of it pseudocoded out; I just need to refine it based on user research, finish the code, and test it in the app with Jubmoji cards.