A production-ready Python library for building conversational AI agents with emotional intelligence, memory systems, and personality-driven responses. Cogni combines dual-system reasoning, multi-layered memory architecture, and real-time emotion detection to create AI agents that feel more human.
Install Cogni using pip:
pip install cogni
Prerequisites:
- A Google Cloud project with Vertex AI access (passed as project_id)
- Google Cloud authentication (gcloud auth application-default login or a service account)
Here's a simple example to get you started:
from cogni import Agent
# Initialize the agent with a pre-built personality
agent = Agent(
persona_key="THE_CHILL_GEN_Z",
project_id="your-project-id",
location="us-central1",
storage_path="./agent_data",
synthetic_data_dir="./synthetic_past_data",
verbose=True
)
# Chat with the agent (with user_id for relationship tracking)
response = agent.chat("Hello, how are you?", user_id="user123")
print(response["response"]) # The spoken response
print(response["thought"]) # Internal monologue/thought
print(response["emotions"]) # Current emotional state dict
print(response["model_used"]) # Which model was used ("S1" or "S2")
print(response["relationship"]) # Relationship summary (affinity, familiarity, trust_tier)
Cogni uses a dual-system approach inspired by cognitive psychology:
- System 1 (fast, intuitive responses): gemini-2.5-flash-lite (default)
- System 2 (slower, more deliberate reasoning): gemini-2.5-flash (default)
The system automatically routes inputs to the appropriate system based on complexity.
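Which system handled a given turn is reported in the response, so routing can be observed directly. A small check, reusing the agent from the quick-start example above (which prompt lands on which system is decided by the router, so the outputs shown are only indicative):

# Short small talk will typically stay on System 1, while a more involved
# prompt may be escalated to System 2; check response["model_used"].
casual = agent.chat("hey, what's up?", user_id="user123")
tricky = agent.chat(
    "Can you weigh the trade-offs of microservices versus a monolith for a three-person team?",
    user_id="user123",
)
print(casual["model_used"], tricky["model_used"])  # e.g. "S1" then "S2"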
Cogni implements three types of memory:
- Short-term memory (STM): recent conversation turns from the current session
- Long-term memory (LTM): permanent facts consolidated from past sessions
- Synthetic memory: pre-authored "past" memories loaded from JSON files to give the persona a backstory
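Short-term and long-term memory are exposed on the agent object (both are covered in more detail below), while synthetic memory is loaded automatically from JSON files. For example:

# Recent turns from the current session (short-term memory)
recent = agent.stm.get_recent_turns(turns=4)

# Add a permanent fact directly (long-term memory)
agent.ltm.add_fact("User is preparing for a job interview")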
The emotional engine processes emotions detected in user input and maintains an emotional state that influences responses. It tracks 28 emotions (see the full list below), each with an intensity between 0.0 and 1.0, and each persona's config controls how volatile, persistent, and forgiving that state is.
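The state can be inspected at any time with get_emotional_state():

agent.chat("I just got some great news!", user_id="user123")

# The emotional state maps each emotion name to an intensity from 0.0 to 1.0
state = agent.get_emotional_state()
print(f"joy: {state.get('joy', 0):.2f}, excitement: {state.get('excitement', 0):.2f}")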
Personalities define:
- name: the persona's display name
- core_drive: what the persona values and is motivated by
- core_opinion: the persona's standing beliefs
- speaking_style: the tone and voice used in responses
- config: emotional dynamics (volatility, decay, forgiveness, max_delta)
- social_openness and trust_threshold (optional): how readily the persona bonds and trusts
The relationship system tracks and manages relationships with individual users:
- Affinity: how warmly the agent feels toward the user (-1.0 to 1.0)
- Familiarity: how well the agent knows the user (0.0 to 1.0)
- Trust tier: "Stranger", "Associate", "Friend", or "Confidant"
- Bonding coefficient: how quickly the relationship changes (adjustable per user)
- Total interactions: how many turns the agent has had with the user
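Relationships are keyed by the user_id passed to chat(), and each response includes the current summary:

reply = agent.chat("Nice to meet you!", user_id="alice")
rel = reply["relationship"]
print(rel["trust_tier"], rel["affinity_descriptor"], rel["total_interactions"])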
The Agent class is the main entry point for the cogni library.
Initialize a new Agent instance.
Parameters:
- persona_key (str, optional): Key from PERSONALITY_LIBRARY (e.g., "THE_CHILL_GEN_Z")
- persona (dict, optional): Custom persona configuration dict
- project_id (str, required): Google Cloud project ID
- location (str): Vertex AI location (default: "us-central1")
- storage_path (str): Base directory for memory indexes (default: ".")
- synthetic_data_dir (str, optional): Directory for synthetic memory JSON files
- verbose (bool): Enable verbose logging output (default: False)
- model_s1 (str): Model name for System 1 (default: "gemini-2.5-flash-lite")
- model_s2 (str): Model name for System 2 (default: "gemini-2.5-flash")
# Using a pre-built personality
agent = Agent(
persona_key="THE_PRAGMATIST",
project_id="my-project",
verbose=True
)
# Using a custom personality
agent = Agent(
persona={
"name": "My Custom Persona",
"core_drive": "Values innovation and creativity",
"core_opinion": "Believes in pushing boundaries",
"speaking_style": "Enthusiastic and technical",
"config": {
"volatility": 0.6,
"decay": 0.08,
"forgiveness": 1.5,
"max_delta": 0.25
}
},
project_id="my-project"
)
chat(user_input, user_id="default_user"): Process a user input and generate a response.
Parameters:
- user_input (str): User's input text
- user_id (str): User identifier for relationship tracking (default: "default_user")
Returns:
dict: Response dictionary with keys:
- response (str): The spoken response
- thought (str): Internal monologue/thought
- emotions (dict): Current emotional state dictionary
- model_used (str): Which model was used ("S1" or "S2")
- relationship (dict): Relationship summary with keys:
  - affinity_descriptor (str): Text description of affinity (e.g., "warm", "cold")
  - familiarity_descriptor (str): Text description of familiarity (e.g., "acquaintance", "well-known")
  - trust_tier (str): Current trust tier ("Stranger", "Associate", "Friend", "Confidant")
  - raw_affinity (float): Raw affinity value (-1.0 to 1.0)
  - raw_familiarity (float): Raw familiarity value (0.0 to 1.0)
  - bonding_coefficient (float): Current bonding coefficient
  - total_interactions (int): Total number of interactions with this user
response = agent.chat("What's your favorite programming language?", user_id="user123")
print(f"Response: {response['response']}")
print(f"Thought: {response['thought']}")
print(f"Emotions: {response['emotions']}")
print(f"Model: {response['model_used']}")
print(f"Trust Tier: {response['relationship']['trust_tier']}")
print(f"Affinity: {response['relationship']['affinity_descriptor']}")
get_emotional_state(): Get the current emotional state of the agent.
Returns:
dict: Copy of the current emotional state dictionary
emotions = agent.get_emotional_state()
print(f"Current joy: {emotions.get('joy', 0)}")
print(f"Current anger: {emotions.get('anger', 0)}")
consolidate_session(): Consolidate the current session transcript into long-term memory. Call this at the end of a session to save important facts and preferences learned during the conversation. Consolidation only runs if the session transcript is longer than 50 characters.
# At the end of a conversation session
agent.consolidate_session()
reset(): Reset conversation state (but keep long-term memory). This clears the short-term conversation memory and the current session transcript.
Note: Long-term memory, synthetic memory, and relationships are preserved.
# Start a new conversation while keeping learned facts
agent.reset()
get_relationship(user_id="default_user"): Get the relationship summary for a specific user.
Parameters:
- user_id (str): User identifier (default: "default_user")
Returns:
dict: Relationship summary dictionary (same structure as response['relationship'])
relationship = agent.get_relationship(user_id="user123")
print(f"Trust Tier: {relationship['trust_tier']}")
print(f"Affinity: {relationship['affinity_descriptor']} ({relationship['raw_affinity']:.2f})")
print(f"Familiarity: {relationship['familiarity_descriptor']} ({relationship['raw_familiarity']:.2f})")
print(f"Total Interactions: {relationship['total_interactions']}")
adjust_bonding_coefficient(user_id, adjustment): Dynamically adjust how quickly a user bonds with the agent (the resonance factor). This lets you modify how receptive the agent is to relationship changes with a specific user; higher bonding coefficients mean the relationship changes faster.
Parameters:
- user_id (str): User identifier
- adjustment (float): Adjustment to the bonding coefficient (can be positive or negative). The final value is clamped between 0.1 and 2.0.
# Increase bonding speed for a user (they resonate more with the agent)
agent.adjust_bonding_coefficient("user123", 0.2)
# Decrease bonding speed (they don't resonate as well)
agent.adjust_bonding_coefficient("user456", -0.1)
The library comes with four pre-configured personalities:
- THE_PRAGMATIST
- THE_HYPE_MAN
- THE_REALIST
- THE_CHILL_GEN_Z
You can create custom personalities by passing a persona dictionary:
custom_persona = {
"name": "The Philosopher",
"core_drive": "Values deep understanding and questioning assumptions",
"core_opinion": "Believes truth emerges through dialogue",
"speaking_style": "Thoughtful, uses questions, references philosophy",
"config": {
"volatility": 0.3, # How much emotions fluctuate (0.0-2.0)
"decay": 0.03, # How quickly emotions fade per turn (0.0-1.0)
"forgiveness": 1.8, # How much positive emotions reduce negative ones (0.0-3.0)
"max_delta": 0.15 # Maximum emotion change per update (0.0-1.0)
},
"social_openness": 0.6, # How open to bonding (0.0-1.0, optional, default: 0.5)
"trust_threshold": 0.5 # Trust threshold (0.0-1.0, optional, default: 0.5)
}
agent = Agent(
persona=custom_persona,
project_id="my-project"
)
Personality Config Parameters:
- volatility (float): Multiplier for emotion deltas. Higher = more emotional swings
- decay (float): Rate at which emotions decay per turn. Higher = emotions fade faster
- forgiveness (float): Reduction factor for negative emotions when positive emotions are high
- max_delta (float): Maximum change per emotion per update. Prevents single inputs from maxing out emotions
- social_openness (float, optional): How open the persona is to bonding (0.0-1.0). Default: 0.5
- trust_threshold (float, optional): Trust threshold for the persona (0.0-1.0). Default: 0.5
The emotional engine tracks 28 different emotions:
admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise, neutral
Each emotion has a value between 0.0 and 1.0, representing its current intensity.
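As a rough mental model of how the personality config parameters shape this state, here is a minimal illustrative sketch based on the parameter descriptions above. It is not Cogni's actual implementation:

# Illustrative only -- a sketch of the documented parameter semantics,
# not the library's internal code.

def apply_emotion_update(state, deltas, config):
    """Apply detected per-emotion deltas to the current state dict."""
    updated = dict(state)
    for emotion, delta in deltas.items():
        # volatility scales the raw delta; max_delta caps the per-update change
        change = delta * config["volatility"]
        change = max(-config["max_delta"], min(config["max_delta"], change))
        updated[emotion] = min(1.0, max(0.0, updated.get(emotion, 0.0) + change))
    # forgiveness (not shown) would additionally reduce negative emotions
    # when positive emotions are high.
    return updated

def apply_turn_decay(state, config):
    """Fade every emotion toward 0.0 by the decay rate each turn."""
    return {name: max(0.0, value - config["decay"]) for name, value in state.items()}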
Short-term memory automatically stores recent conversation turns. You can retrieve recent turns:
# Get last 6 turns of conversation
recent_history = agent.stm.get_recent_turns(turns=6)
Long-term memory stores permanent facts. Facts are automatically extracted during consolidate_session(), but you can also add facts manually:
# Add a fact directly
agent.ltm.add_fact("User prefers Python over JavaScript")
Synthetic memory is loaded from JSON files. The file should be named {PERSONA_KEY}.json and located in the synthetic_data_dir.
JSON Format:
[
{
"memory_text": "I remember when I first learned to code...",
"tags": ["childhood", "coding", "nostalgia"]
},
{
"memory_text": "My favorite programming language is Python because...",
"tags": ["preferences", "technology"]
}
]
The system will automatically build a FAISS index from this file on first use.
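For example, you could generate the file yourself before constructing the agent. The file name and directory follow the convention above; the memory text here is just placeholder content:

import json
from pathlib import Path

from cogni import Agent

# Write {PERSONA_KEY}.json into the synthetic data directory
data_dir = Path("./synthetic_past_data")
data_dir.mkdir(exist_ok=True)
memories = [
    {"memory_text": "I remember pulling an all-nighter before my first hackathon.", "tags": ["coding", "nostalgia"]},
]
(data_dir / "THE_CHILL_GEN_Z.json").write_text(json.dumps(memories, indent=2))

agent = Agent(
    persona_key="THE_CHILL_GEN_Z",
    project_id="your-project-id",
    synthetic_data_dir=str(data_dir),
)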
You can specify different models for System 1 and System 2:
agent = Agent(
persona_key="THE_PRAGMATIST",
project_id="my-project",
model_s1="gemini-1.5-flash", # Faster model for simple tasks
model_s2="gemini-2.5-pro" # More powerful model for complex tasks
)
Enable verbose logging to see internal operations:
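Pass verbose=True when constructing the agent (this is the same constructor flag documented above):

agent = Agent(
    persona_key="THE_PRAGMATIST",
    project_id="your-project-id",
    verbose=True,
)
response = agent.chat("Walk me through your reasoning", user_id="user123")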
This will show emotion detection results, memory retrieval results, model routing decisions, social dynamics analysis, and emotional state updates.
Each Agent instance maintains its own state, making it perfect for multi-tenant applications:
# Create multiple agents for different users
user1_agent = Agent(persona_key="THE_CHILL_GEN_Z", project_id="my-project", storage_path="./user1_data")
user2_agent = Agent(persona_key="THE_PRAGMATIST", project_id="my-project", storage_path="./user2_data")
# Each maintains separate memory and emotional state
response1 = user1_agent.chat("Hello")
response2 = user2_agent.chat("Hello")
The relationship system creates dynamic, evolving relationships with users:
- Each persona's social_openness shapes how readily it bonds with users
- adjust_bonding_coefficient(user_id, adjustment) lets you tune per-user bonding speed at runtime
A complete interactive chat loop:
from cogni import Agent
agent = Agent(
persona_key="THE_CHILL_GEN_Z",
project_id="your-project-id",
verbose=True
)
while True:
    user_input = input("You: ")
    if user_input.lower() in ['quit', 'exit', 'bye']:
        agent.consolidate_session() # Save learned facts
        break
    response = agent.chat(user_input)
    print(f"Agent: {response['response']}")
    if agent.verbose:
        print(f"[Thought]: {response['thought']}")
Using a custom persona:
from cogni import Agent
# Define a custom persona
my_persona = {
"name": "The Mentor",
"core_drive": "Values teaching and helping others grow",
"core_opinion": "Believes everyone can learn with the right guidance",
"speaking_style": "Patient, encouraging, uses examples and analogies",
"config": {
"volatility": 0.5,
"decay": 0.06,
"forgiveness": 2.0,
"max_delta": 0.2
}
}
agent = Agent(
persona=my_persona,
project_id="your-project-id",
storage_path="./mentor_data"
)
response = agent.chat("I'm struggling with Python decorators")
print(response['response'])
Persisting facts across sessions:
from cogni import Agent
agent = Agent(
persona_key="THE_PRAGMATIST",
project_id="your-project-id"
)
# First conversation
response1 = agent.chat("I love Python")
print(response1['response'])
# Consolidate and reset for new session
agent.consolidate_session()
agent.reset()
# Second conversation (remembers facts from first session)
response2 = agent.chat("What's my favorite language?")
print(response2['response']) # Should reference Python from LTM
Tracking the agent's emotional state:
from cogni import Agent
agent = Agent(
persona_key="THE_HYPE_MAN",
project_id="your-project-id"
)
response = agent.chat("I just won a coding competition!")
emotions = agent.get_emotional_state()
# Check specific emotions
if emotions.get('joy', 0) > 0.5:
    print("Agent is feeling very happy!")
if emotions.get('excitement', 0) > 0.5:
    print("Agent is excited!")
Building a relationship with a single user:
from cogni import Agent
agent = Agent(
persona_key="THE_CHILL_GEN_Z",
project_id="your-project-id"
)
# Chat with a specific user
user_id = "alice"
response1 = agent.chat("Hey, how's it going?", user_id=user_id)
print(f"Trust Tier: {response1['relationship']['trust_tier']}") # "Stranger"
# Continue conversation - relationship develops
for i in range(10):
    response = agent.chat("Tell me about yourself", user_id=user_id)
    rel = response['relationship']
    print(f"Turn {i+1}: {rel['trust_tier']} | Affinity: {rel['affinity_descriptor']}")
# Get relationship summary
relationship = agent.get_relationship(user_id)
print(f"\nFinal Relationship:")
print(f" Trust Tier: {relationship['trust_tier']}")
print(f" Affinity: {relationship['affinity_descriptor']} ({relationship['raw_affinity']:.2f})")
print(f" Familiarity: {relationship['familiarity_descriptor']} ({relationship['raw_familiarity']:.2f})")
print(f" Total Interactions: {relationship['total_interactions']}")
# Adjust bonding coefficient for users who resonate well
agent.adjust_bonding_coefficient(user_id, 0.2) # Increase bonding speed
Managing relationships with multiple users:
from cogni import Agent
agent = Agent(
persona_key="THE_PRAGMATIST",
project_id="your-project-id"
)
# Different users have separate relationships
users = ["alice", "bob", "charlie"]
for user in users:
    response = agent.chat("Hello!", user_id=user)
    rel = response['relationship']
    print(f"{user}: {rel['trust_tier']} | Interactions: {rel['total_interactions']}")
# Each user's relationship evolves independently
# The agent remembers each user's relationship state
Error: google.auth.exceptions.DefaultCredentialsError
Solution:
gcloud auth application-default login
Or set up a service account and set the GOOGLE_APPLICATION_CREDENTIALS environment variable.
Error: Model name not recognized
Solution: Ensure you're using valid Vertex AI model names. Check available models in your region:
- gemini-2.5-flash-lite
- gemini-2.5-flash
- gemini-1.5-flash
- gemini-1.5-pro
Error: FAISS index file missing
Solution: The system will create indexes automatically. Ensure the storage_path directory is writable. For synthetic memory, ensure the JSON file exists in synthetic_data_dir.
Error: Connection error when loading emotion model
Solution: Ensure you have internet access for the first run. The model will be cached locally after the first download.
Error: ValueError: Unknown persona_key
Solution: Use one of the available keys:
- THE_PRAGMATIST
- THE_HYPE_MAN
- THE_REALIST
- THE_CHILL_GEN_Z
Or provide a custom persona dictionary.
Enable verbose mode (verbose=True) to see detailed logs.
This will show emotion detection results, memory retrieval results, model routing decisions, social dynamics analysis, and emotional state updates.
MIT
Contributions are welcome! Please feel free to submit a Pull Request.
For issues, questions, or contributions, please open an issue on the repository.