Example: Trust Decisions

[Illustration: agents with thought bubbles and trust links]

This example walks through a simulation showing how personality traits, specifically trust propensity, influence behavioral decisions. We will create two entities with different dispositions and observe how the same relationship context produces different trust outcomes.

The Scenario

We want to model two people, Alice and Bob, who have different baseline tendencies to trust others. Alice is naturally trusting (high trust propensity), while Bob is more cautious (moderate trust propensity). Both meet a new colleague, Carol, and we want to compute their willingness to trust her in different domains.

Step 1: Create the Entities

First, we create Alice with a high trust propensity of 0.8 on a 0-1 scale. This represents someone who generally assumes good intentions and gives others the benefit of the doubt.

// Create Alice with high trust propensity
use behaviorsim_rs::entity::EntityBuilder;
use behaviorsim_rs::enums::Species;
use behaviorsim_rs::state::{Disposition, Hexaco};
use behaviorsim_rs::types::Duration;

let alice = EntityBuilder::new()
    .id("alice")
    .species(Species::Human)
    .age(Duration::years(32))
    .hexaco(
        Hexaco::new()
            .with_honesty_humility(0.7)
            .with_neuroticism(0.5)
            .with_extraversion(0.6)
            .with_agreeableness(0.8)
            .with_conscientiousness(0.6)
            .with_openness(0.7),
    )
    .disposition(Disposition::new().with_trust_propensity_base(0.8))
    .build()?;

Now we create Bob with a more moderate trust propensity of 0.5. This represents someone who is neither particularly trusting nor distrusting: he waits for evidence before extending trust.

// Create Bob with moderate trust propensity
let bob = EntityBuilder::new()
    .id("bob")
    .species(Species::Human)
    .age(Duration::years(45))
    .hexaco(
        Hexaco::new()
            .with_honesty_humility(0.6)
            .with_neuroticism(0.4)
            .with_extraversion(0.4)
            .with_agreeableness(0.5)
            .with_conscientiousness(0.7)
            .with_openness(0.5),
    )
    .disposition(Disposition::new().with_trust_propensity_base(0.5))
    .build()?;

Step 2: Create the Target and Add to Simulation

Carol is the person both Alice and Bob will be evaluating for trustworthiness.

// Create Carol (the person being evaluated)
use behaviorsim_rs::simulation::Simulation;
use behaviorsim_rs::types::{EntityId, Timestamp};

let carol = EntityBuilder::new()
    .id("carol")
    .species(Species::Human)
    .age(Duration::years(28))
    .hexaco(
        Hexaco::new()
            .with_honesty_humility(0.65)
            .with_neuroticism(0.5)
            .with_extraversion(0.7)
            .with_agreeableness(0.6)
            .with_conscientiousness(0.75)
            .with_openness(0.6),
    )
    .build()?;

// Add all entities to the simulation
let reference = Timestamp::from_ymd_hms(2024, 1, 1, 0, 0, 0);
let mut sim = Simulation::new(reference);
sim.add_entity(alice, reference);
sim.add_entity(bob, reference);
sim.add_entity(carol, reference);

let alice_id = EntityId::new("alice").unwrap();
let bob_id = EntityId::new("bob").unwrap();
let carol_id = EntityId::new("carol").unwrap();

Step 3: Establish Relationships

We create relationships between Alice-Carol and Bob-Carol. These are new relationships with no history, so trust starts at the baseline level influenced by trust propensity.

// Establish relationships
use behaviorsim_rs::enums::RelationshipSchema;

let formed = Timestamp::from_ymd_hms(2024, 1, 1, 0, 0, 0);
let alice_carol_rel = sim.add_relationship(
    alice_id.clone(),
    carol_id.clone(),
    RelationshipSchema::Peer,
    formed,
);
let bob_carol_rel = sim.add_relationship(
    bob_id.clone(),
    carol_id.clone(),
    RelationshipSchema::Peer,
    formed,
);

Step 4: Query Initial Trust

Before any interactions occur, let us query how much Alice and Bob trust Carol. Remember, trust is decomposed into three components following Mayer's model: competence (Mayer's "ability"), benevolence, and integrity.

// Query Alice's initial trust in Carol
use behaviorsim_rs::enums::Direction;

let alice_rel = sim.get_relationship(&alice_carol_rel).unwrap().relationship();
let alice_trust = alice_rel.trustworthiness(Direction::AToB);

println!("Alice's initial trust in Carol:");
println!("  Competence:  {:.2}", alice_trust.competence_effective());
println!("  Benevolence: {:.2}", alice_trust.benevolence_effective());
println!("  Integrity:   {:.2}", alice_trust.integrity_effective());

// Query Bob's initial trust in Carol
let bob_rel = sim.get_relationship(&bob_carol_rel).unwrap().relationship();
let bob_trust = bob_rel.trustworthiness(Direction::AToB);

println!("Bob's initial trust in Carol:");
println!("  Competence:  {:.2}", bob_trust.competence_effective());
println!("  Benevolence: {:.2}", bob_trust.benevolence_effective());
println!("  Integrity:   {:.2}", bob_trust.integrity_effective());

Why the Difference?

Notice that Alice's trust values are higher than Bob's across all three dimensions. This is because:

  • Alice's high trust propensity (0.8) raises her baseline assumptions about others
  • Bob's moderate trust propensity (0.5) produces neutral starting assumptions
  • Neither has evidence about Carol yet, so propensity dominates

This matches psychological reality: some people start from a position of trust, others from skepticism.
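The mechanics behind this can be pictured as a weighted blend of disposition and evidence. The sketch below is an illustrative model in plain Rust, not the library's actual formula; the function name and weighting scheme are assumptions chosen to show why propensity dominates when no evidence exists.

```rust
// Illustrative sketch: effective trust as a blend of dispositional
// propensity and accumulated evidence. NOT the library's real formula.
fn effective_trust(propensity: f64, evidence_mean: f64, evidence_count: u32) -> f64 {
    // With no observations, propensity fully determines the estimate;
    // as observations accumulate, evidence takes over.
    let evidence_weight = evidence_count as f64 / (evidence_count as f64 + 2.0);
    (1.0 - evidence_weight) * propensity + evidence_weight * evidence_mean
}

fn main() {
    // No evidence yet: Alice (propensity 0.8) starts higher than
    // Bob (propensity 0.5) toward the same stranger.
    println!("Alice, no evidence: {:.2}", effective_trust(0.8, 0.5, 0));
    println!("Bob, no evidence:   {:.2}", effective_trust(0.5, 0.5, 0));

    // After several positive observations (mean 0.9), the two converge.
    println!("Alice, 6 observations: {:.2}", effective_trust(0.8, 0.9, 6));
    println!("Bob, 6 observations:   {:.2}", effective_trust(0.5, 0.9, 6));
}
```

Under this toy model, the gap between Alice and Bob is widest before any interaction and shrinks as shared evidence accumulates, which is the behavior the simulation exhibits.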

Step 5: Compute Trust Decisions

Now we simulate a concrete decision: Would Alice and Bob be willing to delegate an important project to Carol?

// Compute trust decisions for a high-stakes delegation
use behaviorsim_rs::enums::{DispositionPath, StatePath};
use behaviorsim_rs::relationship::StakesLevel;

let query_time = Timestamp::from_ymd_hms(2024, 1, 15, 0, 0, 0);
let alice_state = sim.entity(&alice_id).unwrap().state_at(query_time);
let bob_state = sim.entity(&bob_id).unwrap().state_at(query_time);

let alice_propensity = alice_state
    .get_effective(StatePath::Disposition(DispositionPath::TrustPropensity)) as f32;
let bob_propensity = bob_state
    .get_effective(StatePath::Disposition(DispositionPath::TrustPropensity)) as f32;

let alice_decision = alice_rel.compute_trust_decision(
    Direction::AToB,
    alice_propensity,
    StakesLevel::High,
);
let bob_decision = bob_rel.compute_trust_decision(
    Direction::AToB,
    bob_propensity,
    StakesLevel::High,
);

println!("Willingness to delegate a high-stakes task:");
println!("  Alice: {:.2}", alice_decision.task_willingness());
println!("  Bob:   {:.2}", bob_decision.task_willingness());

Interpreting the Results

Alice's willingness of 0.62 suggests she would likely delegate the project, though she would appreciate some safeguards. Her high trust propensity allows her to take this calculated risk.

Bob's willingness of 0.38 suggests he would be reluctant to delegate without more information. His moderate trust propensity, combined with high stakes and low evidence, produces caution.
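One simple way to picture how stakes discount trust into willingness is an explicit penalty per stakes level. This is an illustrative sketch only; the enum variants mirror the example, but the penalty values and the subtraction rule are assumptions, and the library's actual decision model is more involved.

```rust
// Illustrative sketch of how stakes discount trust into willingness.
// The penalty values are assumptions, not the library's actual numbers.
#[derive(Clone, Copy)]
enum Stakes {
    Low,
    Medium,
    High,
}

fn willingness(trust: f64, stakes: Stakes) -> f64 {
    // Higher stakes demand more trust for the same willingness to act.
    let penalty = match stakes {
        Stakes::Low => 0.0,
        Stakes::Medium => 0.15,
        Stakes::High => 0.30,
    };
    (trust - penalty).clamp(0.0, 1.0)
}

fn main() {
    // The same moderate trust level supports a low-stakes favor
    // but falls short for a high-stakes delegation.
    let trust = 0.55;
    println!("Low stakes:  {:.2}", willingness(trust, Stakes::Low));
    println!("High stakes: {:.2}", willingness(trust, Stakes::High));
}
```

The point of the sketch is the ordering, not the exact numbers: for a fixed trust level, willingness falls monotonically as stakes rise.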

Step 6: How Trust Evolves

Now let us add an event: Carol successfully completes a smaller task, demonstrating competence.

// Carol demonstrates competence with a successful work achievement
use behaviorsim_rs::enums::{EventPayload, EventType, LifeDomain};
use behaviorsim_rs::event::EventBuilder;

let achievement_payload = EventPayload::Achievement {
    domain: LifeDomain::Work,
    magnitude: 0.7,
};

let alice_observes = EventBuilder::new(EventType::AchieveGoalMajor)
    .source(carol_id.clone())
    .target(alice_id.clone())
    .severity(0.6)
    .payload(achievement_payload.clone())
    .build()?;

let bob_observes = EventBuilder::new(EventType::AchieveGoalMajor)
    .source(carol_id.clone())
    .target(bob_id.clone())
    .severity(0.6)
    .payload(achievement_payload)
    .build()?;

let event_time = Timestamp::from_ymd_hms(2024, 2, 1, 0, 0, 0);

sim.add_event(alice_observes, event_time);
sim.add_event(bob_observes, event_time);

// Query updated trustworthiness
let alice_updated = sim.get_relationship(&alice_carol_rel).unwrap().relationship();
let bob_updated = sim.get_relationship(&bob_carol_rel).unwrap().relationship();

println!("After competence demonstration:");
println!(
    "Alice's trust - Competence: {:.2}",
    alice_updated.trustworthiness(Direction::AToB).competence_effective()
);
println!(
    "Bob's trust - Competence:   {:.2}",
    bob_updated.trustworthiness(Direction::AToB).competence_effective()
);

Both Alice and Bob update their competence assessments based on the evidence. But notice the difference in magnitude:

  • Alice moved from 0.56 to 0.68 (+0.12)
  • Bob moved from 0.42 to 0.58 (+0.16)

Bob's larger update reflects that he was further from where the evidence points. Evidence has more room to move a skeptic than someone who already believed.
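This gap-proportional updating can be sketched as a simple error-correction rule. The sketch below is an illustrative model with assumed numbers (the evidence value and learning rate are made up), not the library's internal algorithm, but it shows why the skeptic moves further.

```rust
// Illustrative sketch: belief update proportional to the gap between
// the prior belief and the observed evidence. Rate and evidence value
// are assumed, not taken from the library.
fn update(prior: f64, evidence: f64, rate: f64) -> f64 {
    prior + rate * (evidence - prior)
}

fn main() {
    let evidence = 0.9; // a strong competence demonstration (assumed value)
    let rate = 0.3;     // learning rate (assumed value)

    let alice_new = update(0.56, evidence, rate);
    let bob_new = update(0.42, evidence, rate);

    // Bob's prior is further from the evidence, so his update is larger.
    println!("Alice: 0.56 -> {:.2} (+{:.2})", alice_new, alice_new - 0.56);
    println!("Bob:   0.42 -> {:.2} (+{:.2})", bob_new, bob_new - 0.42);
}
```

With any positive rate, the update is strictly increasing in the prior-evidence gap, which is exactly the pattern in the simulation output above.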

What This Example Demonstrates

Personality Shapes Initial Conditions

Trust propensity creates different starting points for the same relationship. This is why first impressions vary so much between people.

Trust Is Multidimensional

Following Mayer's model, we track competence (ability), benevolence, and integrity separately. Carol might be trusted for competence but not yet for having good intentions.

Context Matters

The same trust levels produce different decisions depending on stakes, reversibility, and available evidence. Trust is always evaluated in context.

Evidence Updates Beliefs

When Carol demonstrates competence, both Alice and Bob update their assessments. The system models belief updating, not just static traits.

Try It Yourself

Experiment with different scenarios:

  • What if Carol makes a mistake? How does trust recover over time?
  • What if Alice and Bob have different relationship contexts with Carol (friend vs. colleague)?
  • How do high-stakes vs. low-stakes decisions change willingness to trust?

The Behavioral Pathways library lets you explore these questions computationally.