Cellular Automata and Graph Neural Networks for Psychometric Introspection
Just an idea. Just for fun. Inspired by a desire to understand myself more rigorously, systematically, and document it. The problem and methods excite me, which is why I chose to write about it instead of the 10 other ideas I’ve had this week, or the 100 others locked up in my notepads. I do not have to do anything, but I can’t do nothing.
Plus it is relevant to neuro-symbolic AI, which sounds awesome. It draws on Cellular Automata and Graph Neural Networks, which I myself understand only insofar as they are relevant to the problem and purpose I will attempt to describe.
At least one person believes that a generalized version of this idea could lead to AGI (validated by its similarity to principles described by UC Berkeley Prof. Yi Ma here). I am currently neither capable of nor interested in directly pursuing that line beyond the higher-level philosophy and applications, but feel free to try it yourself.
Introduction
We’re surrounded by AI that wants to entertain us, sell to us, or predict our clicks. Can AI help us understand ourselves well enough to make choices that actually align with who we are? Do you even know who you are? Are you sure you exist? Anyway.
Not by replacing human reflection, but by giving it better tools that allow us to detect patterns and better process the huge amount of data (as well as instruct us on how to acquire it and what kind and amount is most relevant for any desired goal or problem).
Not by imposing external standards, but by revealing the structure of your own thinking. BCIs may decode internal monologues directly, and tapping into global workspace networks may allow direct brain-to-brain telepathy, but for this post I will keep it non-sci-fi: practical, feasible, actionable, focused on exploration and learning.
Not only by offering easy answers, but by asking the questions you didn’t know to ask; by helping you ask better questions that elicit better responses, training your mind to know yourself, and yourself in relation to the world, to better navigate its complexity.
Overview of the idea
Imagine having a personal assistant that doesn’t just help you organize tasks—it helps you understand how you think. This project creates a system that maps your mind by analyzing your notes, journals, and daily patterns, then uses artificial intelligence to help you make better decisions about your career, learning, and life direction.
Think of it as creating a “cognitive GPS” that shows you where you are mentally, where you might want to go, and which paths suit your unique way of thinking.
Right now, young people especially face three challenges:
Information overload: Endless content streams (AI slop, brainrot, doomscrolling uncurated dopamine-hijacking social media streams) that erode critical thinking.
Career confusion: Rapid job market changes make it hard to know where your talents fit— it’s been predicted that soon we may have roles we’ve never heard of.
Self-knowledge gap: Traditional personality tests give you static labels, but don’t help you understand yourself in action. This can be combined with psychometrics and whatever can be inferred from multimodal sensory data, integrated with AI and Obsidian.
Most productivity tools just transcribe what you say or summarize what you write. But your mind isn’t just text: it’s a network of interconnected ideas, values, and patterns. We need tools that understand this deeper structure of interrelationships.
How It Works
Layer 1: Gathering Your Data
The system collects information about you from multiple sources:
Structured assessments: You take validated cognitive and personality tests that measure things like verbal reasoning, creativity, or how you handle stress.
Your digital garden: Your notes in apps like Obsidian become the raw material—every thought, question, and connection you’ve written down or recorded.
Behavioral patterns: Optional data from wearables or apps that track when you’re most focused, your energy levels, or even how your music choices reflect your mood; forming mappings to physiology to be context-sensitive and personalized.
Layer 2: The AI Companion (Second Brain)
This is where two types of artificial intelligence work together:
The Pattern Finder (Graph Neural Networks)
Think of your notes and thoughts as dots on a map. Some dots (concepts) are close together, some are far apart. A Graph Neural Network (GNN) is an algorithm that studies this map to find:
Your mental hubs: Which ideas do you think about most?
Isolated islands: Topics you haven’t connected yet.
Hidden bridges: Concepts that could link different areas of your thinking.
The GNN doesn’t just count words; it understands meaning, semantics. It knows that “curiosity” and “exploration” are related, even if you never wrote them together. This is the pattern recognition that normies lack and schizophrenics possess too much of.
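The “mental hubs” and “isolated islands” above can be surfaced with plain graph statistics even before any GNN enters the picture. Here is a minimal sketch in pure Python; the note names and links are invented, and a real version would build the graph from your actual vault with NetworkX or PyTorch Geometric:

```python
from collections import defaultdict

# Toy note graph: each note links to other notes (Obsidian-style wikilinks).
# These note names are purely illustrative.
links = {
    "curiosity": ["exploration", "learning", "flow"],
    "exploration": ["curiosity", "travel"],
    "learning": ["curiosity", "flow"],
    "flow": ["learning"],
    "taxes": [],  # an isolated island: no inbound or outbound connections
}

# Build an undirected adjacency structure.
adj = defaultdict(set)
for src, dsts in links.items():
    adj[src]  # ensure every note appears, even one with no links
    for dst in dsts:
        adj[src].add(dst)
        adj[dst].add(src)

# Mental hubs: notes with the most connections (degree centrality).
hubs = sorted(adj, key=lambda n: len(adj[n]), reverse=True)

# Isolated islands: notes with no connections at all.
islands = [n for n in adj if not adj[n]]

print("top hub:", hubs[0])   # curiosity
print("islands:", islands)   # ['taxes']
```

Degree counting is only the warm-up; the GNN’s job is to go beyond it and relate notes by meaning, not just by explicit links.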
The Logic Checker (Symbolic AI)
While the GNN finds patterns, the Symbolic AI applies rules and logic. You might tell it “I value autonomy”, “I work best in quiet environments”, or “When I’m stressed, don’t schedule creative work”.
The system then checks your plans against these rules. If your calendar shows a brainstorming meeting at 8 PM (when you said you need downtime), it flags the conflict. The founder of Buddi AI is working on such a personal assistant, and I’m looking forward to the insightful convo scheduled for- Oh it’s today, Damn.
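A minimal sketch of that rule checking, assuming rules are stored as simple Python predicates over calendar events; the rules, event fields, and function names here are all hypothetical, and a fuller version could hand the logic to Z3 or SymPy instead:

```python
from datetime import time

# Hypothetical user rules: (description, predicate-over-event) pairs.
# An event is a dict with a name, a start time, and a type tag.
RULES = [
    ("No creative work after 20:00 (need downtime)",
     lambda e: not (e["type"] == "creative" and e["start"] >= time(20, 0))),
    ("Deep work needs a quiet location",
     lambda e: not (e["type"] == "deep" and e.get("location") == "open office")),
]

def check_event(event):
    """Return the descriptions of all rules the event violates."""
    return [desc for desc, ok in RULES if not ok(event)]

# An 8 PM brainstorming meeting conflicts with the downtime rule.
meeting = {"name": "brainstorm", "type": "creative", "start": time(20, 0)}
violations = check_event(meeting)
print(violations)
```

The point of the symbolic layer is exactly this kind of transparent, auditable check: you can read the rule that fired, which a purely neural model would not give you.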
The Magic
The GNN discovers patterns you didn’t know existed. The symbolic system keeps these discoveries logically consistent with what you do know. Together, they create a model of your mind whose existence can finally be validated by an insentient AI. It helps you figure out what you know you know, what you do not know that you know, and what you don’t know you don’t know how to know, and why and by whom to be known, or what it means to know or be known, and the nature and limits of knowledge.
Your brain’s magic. Well, at least it has (or had) the potential to be, unless you consumed too much brainrot (kidding; neuroplasticity and neurogenesis can be induced). This computer runs on magic. Electrons are magic. Quoting Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.” Let’s COOK!
Layer 3: The Interface
Instead of giving you answers, the system asks you the right questions:
Personalized prompts: “You mentioned flow states 12 times this month, always during solo work. Does your career path allow for this?”
Insight summaries: “Your notes on ‘biological materialism’ connect to 47 other concepts. This seems to be a core principle for you. Have you considered how it relates to your career confusion?” (relation graphs, suggested prompts)
Decision support: When you’re choosing between a PhD and law school, the system shows you which choice aligns better with your documented values, energy patterns, and thinking style. (Validate info. Outreach, connect with relevant persons, track attention and affective trajectory over time..)
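A toy sketch of how personalized prompts like the ones above could be generated from mention counts; the journal entries, keywords, threshold, and prompt template are all illustrative assumptions:

```python
import re
from collections import Counter

# Hypothetical journal entries.
entries = [
    "Hit a flow state during solo coding today.",
    "Another deep flow session, again working alone.",
    "Group project meeting drained me; no flow at all.",
]

def mention_counts(texts, keywords):
    """Count how often each tracked keyword appears across all entries."""
    words = Counter(w for t in texts for w in re.findall(r"[a-z]+", t.lower()))
    return {k: words[k] for k in keywords}

def personalized_prompt(counts, threshold=2):
    """Turn frequently mentioned concepts into reflective questions."""
    return [
        f"You mentioned '{concept}' {n} times recently. "
        f"Does your current path make room for it?"
        for concept, n in counts.items() if n >= threshold
    ]

counts = mention_counts(entries, ["flow", "solo"])
print(personalized_prompt(counts))
```

A real system would replace raw keyword counts with embeddings and graph context, but the interface contract stays the same: patterns in, questions out.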
Example
Let’s say you’re 19 and torn between studying computer science or psychology.
You set up the system, take a cognitive test, and import your messy collection of notes about everything you’ve been thinking about. The GNN analyzes your notes and discovers: You write most passionately about x. But you also frequently mention y. Your notes on psychology rarely connect to your notes on programming—they’re two separate islands etc. The symbolic system checks this against your test results:
You scored high on verbal reasoning but average on processing speed.
You marked “autonomy” and “impact” as your top values.
The system generates an insight (the goal is to train your mind to generate such insights on its own, so the AI can be better utilized: recursive bootstrap self-enhancement):
Your graph shows psychology and programming as separate interests, but there’s a hidden connection: computational psychology. Your high verbal reasoning + desire to understand behavior + interest in building suggests you’d thrive in a field like human-computer interaction or AI ethics. Traditional clinical psychology might not satisfy your ‘building’ need, and pure software engineering might not engage your fascination with human behavior. Consider: cognitive science programs that blend both.
You hadn’t thought of this. The system didn’t tell you what to do—it surfaced a pattern in your own thinking that you couldn’t see while living inside your head.
Or if you’re in India, confused, susceptible to societal and parental pressure, risk-averse and not very open to new experiences, you study CSE, obviously. Or you could just resolve the misalignment between abilities and desires by self-modifying, whatevs.
Implications and Motivations
AI Alignment: The hardest problem in artificial intelligence is making sure AI systems understand and respect human values. The multi-billion dollar Alignment problem all of the brightest geniuses worldwide are working on these days. But whose values? Which values? Why? Train AI to understand your values by studying your patterns. If you don’t have any, prepare yourself to be ruled by those who do, I guess.
Cognitive Enhancement: This isn’t about making you smarter. It’s about removing barriers to freedom, autonomy, and authentic self-expression (and its creation and discovery, best utilizing tech for fulfilment). If you have a high-creativity, low-routine cognitive profile, the system helps you find environments where that’s an asset, not a liability.
Preparing for AGI: As AI becomes more powerful, humans need to get better at understanding themselves—their biases, their blind spots, their actual (not aspirational) values. This system works by helping you train AI by making you think you are helping it help you, while it sneakily brainwashes you instead. Kidding.
What’s Next?
Simply put, it can be executed as follows:
Build a basic graph from your notes using free tools.
Add the AI analysis and generate your first insights.
Scale it to help others, validating whether the insights actually improve life decisions.
So much is left to chance. We are constantly driven by things and influences, so we must restore agency and intentionality. We often lack the cognitive and affective capabilities to determine what is best for us; yet in many situations there is obviously a most rational path forward, and ignorance of those paths, and of the freely, openly available tools that could help us find them, causes a lot of avoidable misery and preventable suffering. I shall develop a curriculum, get it validated by experts, and train myself in topics such as logic, set theory, theory of computation, discrete structures and cellular automata (found some cool books in the library! Nothing beats physical books, says someone who has turned fully digital; easier to focus for longer periods of time) and some other stuff I have been putting off for too long, cannot run away from, and cannot continue passively admiring from a distance: stats, category theory, neuropsychology, computational neuroscience, deep learning, real analysis.
Implications for personalized education: speed up learning, track attention and valence. Aristocratic tutoring for all. Z-Library, archive.ph, Sci-Hub, social networks. I believe this time spent thinking and strategizing about learning, rather than learning in an unorganized, unstructured, unsystematic way, is well spent, but it may be approaching the point of diminishing returns and analysis paralysis. Still, we must keep in mind the overarching ‘why’, and the structures and processes that enable it.
Technicals
I will be brief about the technical implementation unless anyone wants me to dive deeper, in which case feel free to reach out. The core pieces: PyTorch Geometric (a Python library for GNNs), NetworkX (for basic graph analysis), Sentence-BERT (to convert your words into mathematical representations the computer can understand), and Obsidian (a free note-taking tool for personal knowledge graph creation via markdown + links).
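As a concrete starting point, extracting the [[wikilink]] edges from an Obsidian vault needs nothing beyond the standard library. A minimal sketch, with invented filenames and contents inlined (a real version would read the `.md` files from your vault; `removesuffix` requires Python 3.9+):

```python
import re

# Toy vault: filename -> markdown content with Obsidian-style [[wikilinks]].
vault = {
    "curiosity.md": "Linked to [[exploration]] and [[learning]].",
    "exploration.md": "Feeds back into [[curiosity]].",
}

# Capture link targets; stop at ']' plus alias ('|') and heading ('#') markers.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def edges_from_vault(vault):
    """Return (source_note, target_note) edges for the knowledge graph."""
    edges = []
    for fname, text in vault.items():
        src = fname.removesuffix(".md")
        for dst in WIKILINK.findall(text):
            edges.append((src, dst.strip()))
    return edges

print(edges_from_vault(vault))
```

This edge list is exactly the input the graph tools above expect: NetworkX can load it directly, and Sentence-BERT embeddings can then be attached to each node as features for the GNN.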
WISPR Flow: voice notes. Standard fitness trackers: when you’re most focused, your energy levels, or even how your music choices reflect your mood (that’s for another day).
Buddi AI: audio journaling and verbal analysis (possible integration).
Wearable APIs (Apple Health/Google Fit): biometric data collection.
SymPy/Z3 Theorem Prover: symbolic logic and rule validation.
PyTorch Geometric: Graph Neural Networks for pattern detection.
Retrieval-Augmented Generation (RAG): connecting LLMs to your knowledge graph.
Neo4j/Graph Database: persistent storage of the cognitive architecture.
Streamlit/Gradio: interactive dashboard for visualization.
Python/JavaScript: core programming languages for implementation.
Similar projects elsewhere: DARPA FRONT (neurosymbolic approaches), Arcarae and OMI AI (founders who are understandably too busy to entertain the invitations of a weirdo like me; I do not mean to sound impolite, just some light-hearted self-mockery. Plus, OMI does audio transcription anyway, Buddi AI does the same, no biggie. Reading about Arcarae was sufficiently inspiring, hence the free advertising.)
Estimated development time: 3-6 months for MVP
Cost: $1,000-5,000 for tools/APIs (INR 1-5 lakh)
Thoughts
Huge waste of time, this academic system is. Focus on training thinking skills and mental toolkits that are universally applicable and transferable across domains. That, AI cannot do. Toolkits that train you to create and spontaneously generate frameworks and prompts, frame the right questions, evaluate truth, reason logically, and execute.
Train till automatic, internalized. Motivation can be reprogrammed, modified, enhanced. Feelings can be modulated, modified, reduced, eradicated, changed. They are transient, fickle, unreliable, illusory. Welcome to the era of radical biohacking.
I am going to learn the following topics in January in order to achieve an intuition for their integration, and simultaneously execute the project. Two possible sources of funding: a Neurolaw Dana Foundation grant, and my university. But the intention is to learn, build to learn, and develop the competence to understand and solve it, just following what excites me, no expectations. Open to updates and feedback. The brain is a control system. It hates uncertainty. It reduces uncertainty, prediction error, entropy.
Articulating a coherent and somewhat synthesized version of these ideas gives mental peace, gets me started towards the more ambitious goals concerning cognitive enhancement, community building, affect regulation and modification, immortality etc. We must orient ourselves towards the highest Good that appeals to us, and the largest impact we are capable of having, but of course we are not obligated to it and it won’t be understood by those who are unlikely to regret neglecting it anyway.
In January, I am going to attend an AI alignment summit with some cool alignment people from all over the world. Nice for socialization, especially since I had to miss the recent EA event due to exams. The summit relates to alignment through Reinforcement Learning from Human Feedback (RLHF), but I would argue that even before that we need to understand and explore human values: how value systems align, where they converge and diverge such that conflicts arise, their origins and transformations, etc. The meaning crisis and the recent focus on relevance realization (see Dr. John Vervaeke’s work; looking forward to a chat with one of his Physics-CogSci student collaborators this evening!) show that we lack frameworks and sufficiently meaningful opportunities to understand our value functions. Social media makes it a lot worse, as does the loneliness epidemic in general, by preventing deep connections; combined with the loss of third places in communities... we could utilize technologies better by aiming not for profit but for well-being. How? We’ll see.

