⚖️ Conversational extraction

Your way of deciding is as personal as your fingerprint

Analyze until paralysis or decide with your gut? Seek the optimal or the sufficient? The AI needs to know.

Herbert Simon won the Nobel in Economics for showing that humans don't decide the way economic theory predicts. Daniel Kahneman won another for showing that we don't even know how we decide. This module doesn't judge you — it maps you. Are you an efficient satisficer or a tormented maximizer? Is your intuition reliable or a dangerous shortcut? The answer radically changes how an AI should present options to you.

10-18 min guided conversation
2 main constructs
3 cross-layer axes: C × ZTPI × NFC

What does this conversation explore?

The decision module presents real and hypothetical scenarios with incomplete information and observes how you react. Do you ask for more data, or decide with what you have? Do you weigh pros and cons, or follow a hunch? Do you decide fast and adjust, or deliberate long and then execute?

There's no right answer. A satisficer isn't lazy — they're efficient. A maximizer isn't obsessive — they're meticulous. What matters is that the AI knows which you are, so it presents information in the format your style needs: comparison tables for the analyst, executive summary for the intuitive.

Analysis depth

How much information do you need to decide? From 'enough' to 'exhaustive' — your cutoff point calibrates the density of the AI's responses.

Risk tolerance

Do you decide knowing you might be wrong, or need to minimize risk? This axis affects how the AI presents options under uncertainty.

Satisficing vs. maximizing

Do you seek good enough or the best possible? Simon and Schwartz's distinction, which predicts how satisfied you'll be with the decisions you make.

Temporal framing

Do you decide thinking about tomorrow or ten years out? Cross-referenced with your ZTPI time orientation for a complete picture of your decision horizon.

Methodological foundation

The module combines bounded rationality research (Simon, 1955) with Kahneman's (2011) dual-process paradigm. Presented scenarios activate both System 1 (fast, intuitive) and System 2 (slow, analytic), revealing which dominates.

Triangulation with Big Five Conscientiousness and ZTPI time orientation adds predictive depth: a maximizer with high future orientation decides differently than one with present orientation — and the AI needs to distinguish them.

Key references

Schwartz, B., et al. (2002). Maximizing versus satisficing. Journal of Personality and Social Psychology, 83(5), 1178–1197.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69(1), 99–118.

How does the AI use this?

Your decision style calibrates how the AI presents options. For the maximizer: exhaustive comparison tables. For the satisficer: summary with recommendation. For the intuitive: conclusion first, data second. For the analyst: the reverse.
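The calibration described above can be pictured as a simple dispatch from decision style to presentation format. This is purely an illustrative sketch — the names and mapping are hypothetical, not Afini's actual implementation:

```python
# Hypothetical sketch: mapping a decision style to a presentation
# format. All names here are illustrative assumptions, not Afini's API.

PRESENTATION = {
    "maximizer": "exhaustive comparison table with all options and criteria",
    "satisficer": "short summary plus a single recommendation",
    "intuitive": "conclusion first, supporting data on request",
    "analyst": "supporting data first, conclusion last",
}

def present_options(style: str) -> str:
    """Return the presentation format for a decision style,
    falling back to a neutral format for unknown styles."""
    return PRESENTATION.get(style, "balanced summary of options")
```

The point of the sketch is the fallback: without a profile, the AI can only serve the neutral format; with one, it picks the format that matches how you actually decide.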

Without profile

"There are three options: A, B, C. Each has advantages and disadvantages you should evaluate based on your priorities..."

With your Afini profile

"As a satisficer with good intuition: A meets your three main criteria and has no dealbreakers. If you need more detail, I have it — but you probably don't. Decide and adjust on the fly — that's your style."

One gives you three options without criteria. The other knows how much information you need to decide.

Know your decision style

10-18 minutes of scenarios that reveal how you decide — so the AI presents options in the format your brain processes best.
