Compilers vs. Conversation: Formal vs. Informal Logic

For the latest entry on iversonsoftware.com, we are looking under the hood of human reasoning to examine the two primary “engines” that drive our conclusions: Formal vs. Informal Logic. While one operates like a strict mathematical compiler, the other functions like a flexible natural language processor. Understanding the difference is the key to both writing perfect code and winning a high-stakes debate.

At Iverson Software, we deal with both strict syntax and user intent. In the world of philosophy, this same divide exists in how we process arguments. Formal Logic is the study of the structure of arguments, while Informal Logic is the study of arguments as they are used in everyday communication.

1. Formal Logic: The Mathematical Syntax

Formal logic (often called Symbolic Logic) is concerned entirely with the form or structure of an argument, rather than its specific content.

    • The Logic of Variables: It replaces words with symbols ($P$, $Q$, $\rightarrow$). It doesn’t care if $P$ stands for “The server is down” or “The moon is made of cheese”; it only cares if the relationship between $P$ and $Q$ is valid.

    • Deductive Certainty: If a formal argument is valid and the premises are true, the conclusion is 100% certain. There is no “opinion” involved—it is a mathematical necessity.

    • The Truth Table: In formal logic, we use tools like truth tables to map out every possible scenario for a set of propositions to ensure the logic never “breaks.”
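
To see this in action, here is a minimal Python sketch that enumerates every scenario for $P$ and $Q$ and confirms that modus ponens never “breaks” (the helper name `implies` is our own illustration, not standard terminology):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional: P -> Q is false only when P is true and Q is false."""
    return (not p) or q

# Enumerate every possible scenario for P and Q, then check that
# modus ponens, ((P -> Q) AND P) -> Q, holds in all of them.
print("P     Q     P->Q   ((P->Q) and P) -> Q")
for p, q in product([True, False], repeat=2):
    conclusion = implies(implies(p, q) and p, q)
    print(f"{p!s:<5} {q!s:<5} {implies(p, q)!s:<6} {conclusion}")
```

The last column prints True in every row, regardless of what $P$ and $Q$ stand for: validity depends on form alone.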


2. Informal Logic: The Semantic Processor

Informal logic deals with “Natural Language Arguments.” It’s the logic we use in legal cases, political debates, and business meetings.

  • The Power of Context: Unlike formal logic, informal logic cares deeply about the content, the tone, and the context of the speaker. It deals with nuances that symbols can’t capture.

  • Inductive Probability: Most informal arguments aren’t meant to be “certain”; they are meant to be cogent or persuasive. They provide a high degree of probability rather than an absolute proof.

  • Practical Application: Informal logic is where we study “Fallacies”—common errors in reasoning like the Straw Man or Slippery Slope—which occur because of how language is used, not just how it’s structured.

3. Key Differences: Accuracy vs. Utility

| Feature | Formal Logic | Informal Logic |
| --- | --- | --- |
| Medium | Symbols and Math | Natural Language |
| Focus | Structural Validity | Persuasive Strength |
| Output | Certainty (True/False) | Probability (Strong/Weak) |
| Environment | Math, CS, Philosophy | Law, Media, Daily Life |

4. Which One Should You Use?

  • Use Formal Logic when “Bugs” are Fatal: When you are designing an algorithm, building a database schema, or constructing a mathematical proof, you need the absolute rigor of formal logic. A single “syntax error” in your logic can crash the entire system.

  • Use Informal Logic when “Nuance” is King: When you are negotiating a contract, leading a team, or analyzing a news report, you need informal logic. You must be able to detect emotional manipulation, evaluate the credibility of sources, and understand the “implied” meanings that symbols miss.


Why This Matters Today

In 2025, the gap between these two is closing. Neurosymbolic AI is the attempt to build machines that combine formal, symbolic logic (to be accurate) with the flexible, context-sensitive pattern processing of neural networks (to understand human intent). By mastering both, you become a “Full-Stack Thinker”—someone who can build rigorous systems and navigate complex human environments with equal skill.

The Internal Audit: A Guide to Critical Reflection

For our latest entry on iversonsoftware.com, we move from the external tools of logic and ethics to the internal process of “System Auditing”: Critical Reflection. While critical thinking focuses on evaluating information, critical reflection focuses on evaluating how we process that information. It is the practice of looking in the mirror to find the “hidden code” driving our decisions.

At Iverson Software, we know that even the best systems need regular reviews to prevent technical debt. Critical Reflection is the human equivalent of a system audit. It is the conscious process of analyzing our experiences, beliefs, and actions to uncover the underlying assumptions that shape our reality. By practicing reflection, we move from being “reactive users” to “intentional architects” of our own lives.

1. Reflection vs. Thinking: What’s the Difference?

It is easy to confuse “thinking about something” with “reflecting on something.”

  • Thinking (The Processing Layer): Aimed at solving a specific problem or reaching a goal (e.g., “How do I fix this bug?”).

  • Critical Reflection (The Meta-Layer): Aimed at understanding the process (e.g., “Why did I assume the bug was in the front-end? What biases led me to overlook the database?”).

2. The Gibbs Reflective Cycle

To make reflection a repeatable process rather than a random thought, philosophers and educators often use the Gibbs Reflective Cycle. This provides a structured “CLI” (Command Line Interface) for your thoughts:

    1. Description: What happened? (The raw log data).

    2. Feelings: What was I thinking and feeling? (The internal state).

    3. Evaluation: What was good and bad about the experience? (The performance review).

    4. Analysis: What sense can I make of the situation? (The root cause analysis).

    5. Conclusion: What else could I have done? (Alternative logic paths).

    6. Action Plan: If it arose again, what would I do? (The system update).
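
One hypothetical way to make the cycle repeatable is to capture each pass as structured data. The Python sketch below is our own illustration; the `GibbsEntry` class and its sample values are not part of Gibbs’s model:

```python
from dataclasses import dataclass, fields

@dataclass
class GibbsEntry:
    """One pass through the Gibbs Reflective Cycle, stored as a record."""
    description: str  # 1. What happened? (the raw log data)
    feelings: str     # 2. What was I thinking and feeling?
    evaluation: str   # 3. What was good and bad about the experience?
    analysis: str     # 4. What sense can I make of the situation?
    conclusion: str   # 5. What else could I have done?
    action_plan: str  # 6. If it arose again, what would I do?

entry = GibbsEntry(
    description="Shipped a hotfix that broke login for some users.",
    feelings="Rushed and overconfident after a string of easy releases.",
    evaluation="Fast turnaround was good; skipping the staging run was bad.",
    analysis="Time pressure led me to treat the staging step as optional.",
    conclusion="A teammate could have run the smoke tests in parallel.",
    action_plan="Make the staging smoke test a hard gate in the deploy script.",
)
for f in fields(entry):
    print(f"{f.name}: {getattr(entry, f.name)}")
```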


3. Identifying the “Implicit Code” (Assumptions)

The core of critical reflection is uncovering Assumptions. These are the “default settings” of our mind that we often take for granted.

  • Paradigmatic Assumptions: Deep-seated beliefs we view as “objective facts” (e.g., “Hard work always leads to success”).

  • Prescriptive Assumptions: Beliefs about how things should happen (e.g., “A manager should always have the answer”).

  • Causal Assumptions: Beliefs about how things work (e.g., “If I provide data, people will change their minds”). Reflection helps us test if these “if-then” statements are actually true.

4. The Benefits of “Downtime”

In a high-speed digital world, reflection requires intentional “latency.”

  • Reflection-in-Action: Checking your assumptions while you are doing a task (Real-time monitoring).

  • Reflection-on-Action: Looking back after the task is finished (Post-mortem analysis). Taking this time allows for Double-Loop Learning, where you don’t just fix a problem but change the underlying rules that allowed it to occur in the first place.


Why Critical Reflection Matters to Our Readers

  • Professional Growth: By reflecting on your projects, you turn “years of experience” into “years of wisdom,” avoiding the trap of repeating the same mistakes annually.

  • Improved Leadership: Leaders who reflect are more aware of their biases, leading to fairer decision-making and better team morale.

  • Agility: Critical reflection is the engine of adaptability. When the “environment” changes (new tech, shifting markets), reflective individuals can quickly update their mental models to stay relevant.

The Human Interface: Understanding the Science of Perception

For our latest entry in the Epistemology series on iversonsoftware.com, we move from the internal realm of beliefs to the frontline of information gathering: Perception. In the digital world, we rely on sensors and APIs; in the human world, perception is the primary interface through which we “ingest” the reality around us.

At Iverson Software, we build tools that display data. But how does that data actually get processed by the human “operating system”? Perception is the process by which we organize, identify, and interpret sensory information to represent and understand our environment. It is the bridge between the raw signals of the world and the meaningful models in our minds.

1. The Two-Stage Process: Sensation vs. Perception

It is a common mistake to think that what we “see” is exactly what is “there.” In reality, our experience is a two-stage pipeline:

  • Sensation (The Input): This is the raw data capture. Your eyes detect light waves; your ears detect sound frequencies. It is the “raw packet” level of human hardware.

  • Perception (The Processing): This is where the brain takes those raw packets and applies a “rendering engine.” It interprets the light waves as a “tree” or the sound frequencies as “music.”

2. Top-Down vs. Bottom-Up Processing

How does the brain decide what it’s looking at? It uses two different “algorithms”:

  • Bottom-Up Processing: The brain starts with the individual elements (lines, colors, shapes) and builds them up into a complete image. This is how we process unfamiliar data.

  • Top-Down Processing: The brain uses its “cached memory”—prior knowledge and expectations—to fill in the blanks. If you see a blurry shape in your kitchen, you perceive it as a “toaster” because that’s what your internal database expects to see there.

3. The “Glitches”: Optical Illusions and Cognitive Bias

Just like a software bug can cause a display error, our perception can be tricked.

  • Gestalt Principles: Our brains are hard-coded to see patterns and “completeness” even when data is missing. We see “wholes” rather than individual parts.

  • The Müller-Lyer Illusion: Even when we know two lines are the same length, the “rendering” of the arrows at the ends forces our brain to perceive them differently.

  • The Lesson: Perception is not a passive mirror; it is an active construction. We don’t see the world as it is; we see it as our “software” interprets it.

4. Perception in the Age of Synthetic Reality

In 2025, the “Human Interface” is being tested like never before.

  • Virtual and Augmented Reality: These technologies work by “hacking” our perception, providing high-fidelity inputs that trick the brain into rendering a digital world as “real.”

  • Deepfakes: These are designed to bypass our “top-down” filters by providing visual data that perfectly matches our expectations of a specific person’s likeness, making it harder for our internal “authenticity checks” to flag an error.


Why Perception Matters to Our Readers

  • UI/UX Design: Understanding how humans perceive patterns and hierarchy allows us to build software that is intuitive and reduces “cognitive load.”

  • Critical Thinking: Recognizing that our perception is influenced by our biases allows us to “sanity check” our first impressions and look for objective data.

  • Digital Literacy: By understanding how our brains can be tricked, we become more vigilant consumers of visual information in a world of AI-generated content.

The Internal Map: Understanding the Nature of Belief

For our latest entry on iversonsoftware.com, we delve back into the core of Epistemology to examine the engine of human conviction: The Nature of Belief. In a world of data streams and decision trees, understanding what constitutes a “belief” is the first step in auditing our internal software.

At Iverson Software, we specialize in references—external stores of information. But how does that information move from a screen into the “internal database” of your mind? In philosophy, a Belief is a mental state in which an individual holds a proposition to be true. It is the fundamental building block of how we navigate reality.

If knowledge is the “output” we strive for, belief is the “input” that makes the process possible.

1. The “Mental Representation” Model

Most philosophers view a belief as a Mental Representation. Think of it as a map of a territory.

  • The Proposition: A statement about the world (e.g., “The server is online”).

  • The Attitude: Your internal stance toward that statement (e.g., “I accept this as true”).

  • The Map is Not the Territory: A belief can be perfectly held but entirely wrong. Just as a corrupted file doesn’t stop a computer from trying to read it, a false belief still directs human behavior as if it were true.
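
As a toy illustration of this model (the `Belief` class below is our own simplification, not a standard formalism), a belief can be rendered as a proposition plus an attitude:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """A belief as a mental representation: a proposition plus an attitude."""
    proposition: str   # a statement about the world
    accepted: bool     # the attitude: held as true, or not

# The map is not the territory: a belief can be held (and acted on) while false.
b = Belief(proposition="The server is online", accepted=True)
actual_server_online = False  # the territory disagrees with the map
print(b.accepted, actual_server_online)  # the belief still drives behavior
```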

2. Doxastic Voluntarism: Can You Choose Your Beliefs?

A major debate in the philosophy of mind is whether we have “admin privileges” over our own beliefs.

  • Direct Voluntarism: The idea that you can choose to believe something through a simple act of will. (Most philosophers argue this is impossible; you cannot simply choose to believe the sky is green right now).

  • Indirect Voluntarism: The idea that we influence our beliefs by choosing which data we consume. By auditing our sources and practicing critical thinking, we “train” our minds to adopt more accurate beliefs over time.

3. Occurrent vs. Dispositional Beliefs

Not all beliefs are “active” in your RAM at all times.

  • Occurrent Beliefs: Thoughts currently at the forefront of your mind (e.g., “I am reading this blog”).

  • Dispositional Beliefs: Information stored in your “hard drive” that you aren’t thinking about, but would affirm if asked (e.g., “Paris is the capital of France”). Most of our world-view is composed of these background dispositional beliefs, acting like a silent OS that influences our reactions without us noticing.

4. The Degrees of Belief (Bayesian Epistemology)

In the digital age, we rarely deal in 100% certainty. Modern epistemology often treats belief as a Probability Scale rather than a binary “True/False” switch.

  • Credence: This is the measure of how much “weight” you give to a belief.

  • Bayesian Updating: When you receive new data, you don’t necessarily delete an old belief; you adjust your “confidence score” based on the strength of the new evidence. This is exactly how modern machine learning and spam filters operate.
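
Here is a minimal sketch of that “confidence score” adjustment, using Bayes’ rule (the prior and likelihood numbers below are invented for illustration):

```python
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Return the posterior credence P(belief | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# Illustrative spam-filter style update: start 20% confident an email is spam,
# then observe a word that appears in 70% of spam but only 5% of normal mail.
credence = 0.20
credence = bayes_update(credence, p_evidence_if_true=0.70, p_evidence_if_false=0.05)
print(f"Updated credence: {credence:.2f}")  # ~0.78, adjusted rather than deleted
```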


Why the Nature of Belief Matters to Our Readers

  • Cognitive Debugging: By recognizing that beliefs are just mental maps, you can become more comfortable “updating the software” when those maps are proven inaccurate.

  • Empathy in Communication: Understanding that others operate on different “internal maps” helps in resolving conflicts and building better collaborative systems.

  • Information Resilience: In an era of deepfakes, knowing how beliefs are formed allows you to guard against “code injection”—the process where misinformation is designed to bypass your logical filters and take root in your belief system.

The Architecture of Proof: Understanding Justification in Epistemology

For our latest entry in the Epistemology series on iversonsoftware.com, we move from the general concept of “knowing” to the specific mechanism that makes knowledge possible: Justification. In an era of “alternative facts” and AI-generated hallucinations, understanding how to justify a claim is the ultimate firewall for your intellectual security.

At Iverson Software, we know that a program is only as reliable as its logic. In philosophy, Justification is the “debugging” process for our beliefs. It is the evidence, reasoning, or support that turns a simple opinion into Justified True Belief—the gold standard of knowledge. Without justification, a true belief is just a lucky guess.

1. The Three Pillars of Justification

How do we support a claim? Most epistemologists point to three primary “protocols” for justifying what we think we know:

  • Empirical Evidence (The Hardware Sensor): Justification through direct observation and sensory experience. If you see it, touch it, or measure it with a tool, you have empirical justification.

  • Logical Deduction (The Source Code): Justification through pure reason. If “A = B” and “B = C,” then “A = C.” This doesn’t require looking at the world; it only requires that the internal logic is sound.

  • Reliable Authority (The Trusted API): Justification based on the testimony of experts or established institutions. We justify our belief in quantum physics not because we’ve seen an atom, but because we trust the rigorous peer-review system of science.

2. Foundationalism vs. Coherentism

Philosophers often argue about how the “stack” of justification is built.

  • Foundationalism: The belief that all knowledge rests on a few basic, “self-evident” truths that don’t need further justification. Think of these as the Kernel of your belief system.

  • Coherentism: The idea that justification isn’t a tower, but a web. A belief is justified if it “coheres” or fits perfectly with all your other beliefs. If a new piece of data contradicts everything else you know, the system flags it as an error.
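
As a toy sketch of that coherentist “error flag” (the encoding below is entirely our own simplification), a new belief can be checked against the existing web:

```python
# Toy coherence check: beliefs are stored as proposition -> stance pairs,
# and a candidate belief is flagged if the web already commits to its negation.
beliefs = {
    "the server is online": True,
    "the monitoring dashboard is accurate": True,
}

def coheres(web: dict[str, bool], proposition: str, stance: bool) -> bool:
    """A candidate belief coheres if it does not directly contradict the web."""
    return web.get(proposition, stance) == stance

candidate = ("the server is online", False)  # new data: "the server is down"
if not coheres(beliefs, *candidate):
    print("Flagged: contradicts the existing web of beliefs")
```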

3. The Gettier Problem: When Justification Fails

In 1963, philosopher Edmund Gettier broke the “Justified True Belief” model with a famous “glitch.” He showed that you can have a justified belief that happens to be true, but is still not knowledge because the truth was a result of luck.

  • The Lesson: Justification must be “indefeasible.” In software terms, this means your “test cases” must be robust enough to account for edge cases and random variables.

4. Justification in the Digital Wild West

In 2025, the “burden of proof” has shifted. With deepfakes and algorithmic bias, we must apply Epistemic Vigilance:

  • Source Auditing: Is the “API” providing this information actually reliable?

  • Corroboration: Can this data point be justified by multiple, independent “sensors”?

  • Falsifiability: Is there any evidence that could prove this belief wrong? If not, it isn’t a justified belief; it’s a dogma.


Why Justification Matters to Our Readers

  • Informed Decision-Making: By demanding justification for your business or technical decisions, you reduce risk and avoid “gut-feeling” errors.

  • Combating Misinformation: When you understand the requirements for justification, you become much harder to manipulate by propaganda or unverified claims.

  • Better Communication: When you can clearly state the justification for your ideas, you become a more persuasive and credible leader.

The Foundation of Reason: Why Logic is the Source Code of Knowledge

At Iverson Software, we deal in structured information and educational references. None of these would be possible without Logic. Logic is the study of correct reasoning—the set of rules that allow us to move from a set of premises to a valid conclusion. It is the invisible scaffolding that supports every scientific discovery, every legal argument, and every line of computer code ever written.

1. Deductive Reasoning: The Logic of Necessity

Deductive reasoning moves from the general to the specific. If the premises are true and the structure is valid, the conclusion must be true. This is the heart of mathematical certainty and programming logic.

  • The Syllogism: A classic three-part argument.

    • Major Premise: All humans are mortal.

    • Minor Premise: Socrates is a human.

    • Conclusion: Therefore, Socrates is mortal.

  • In Software: This is the foundation of if-then statements. If a user’s password is correct (Premise A), and the server is active (Premise B), then access is granted (Conclusion).
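
A minimal sketch of that mapping (the function and parameter names are our own illustration):

```python
def grant_access(password_correct: bool, server_active: bool) -> bool:
    """Deduction as a guard clause: if both premises hold, the conclusion follows."""
    if password_correct and server_active:  # Premise A and Premise B
        return True                         # Conclusion: access is granted
    return False

print(grant_access(password_correct=True, server_active=True))   # True
print(grant_access(password_correct=True, server_active=False))  # False
```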

2. Inductive Reasoning: The Logic of Probability

Inductive reasoning moves from the specific to the general. It involves looking at patterns and drawing probable conclusions. This is the basis of the scientific method and modern Data Analytics.

  • Pattern Recognition: “Every time I have used this software on a Tuesday, it has updated successfully. Therefore, it will likely update successfully next Tuesday.”

  • The Limitation: Unlike deduction, induction doesn’t offer 100% certainty—it offers “statistical confidence.” It is the logic used by AI and machine learning to predict user behavior based on past actions.
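
A minimal sketch of that idea, where confidence is simply the observed frequency (the update history below is invented for illustration):

```python
# Induction as frequency: estimate the probability of a successful Tuesday
# update from past observations. The result is confidence, not certainty.
past_tuesday_updates = [True, True, True, True, True, True, False, True]

successes = sum(past_tuesday_updates)
confidence = successes / len(past_tuesday_updates)
print(f"Estimated P(success next Tuesday) = {confidence:.2f}")  # 0.88, never 1.0
```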

3. Boolean Logic: The Language of Machines

In the mid-1800s, George Boole created a system of algebraic logic that reduced human thought to two values: True (1) and False (0). Today, this is the fundamental language of all digital technology.

  • Logical Operators:

    • AND: Both conditions must be true.

    • OR: At least one condition must be true.

    • NOT: The inverse of the condition.

  • Circuitry: These operators are physically implemented as transistor-based “logic gates” etched into CPUs, allowing machines to perform complex calculations at lightning speed.
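
Python’s `and`, `or`, and `not` evaluate exactly these Boolean functions, so the full behavior of all three operators fits in a few lines:

```python
# The three basic Boolean operators over every input combination.
for a in (True, False):
    for b in (True, False):
        print(f"{a!s:<5} {b!s:<5} AND={a and b!s:<5} OR={a or b!s:<5} NOT a={not a}")
```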

4. Informal Logic and Fallacies: Debugging Human Thought

While formal logic deals with abstract symbols, Informal Logic deals with everyday language. It helps us identify “bugs” in reasoning known as Logical Fallacies.

  • Ad Hominem: Attacking the person instead of the argument.

  • Straw Man: Misrepresenting an opponent’s position to make it easier to attack.

  • Confirmation Bias: Strictly a cognitive bias rather than a formal fallacy, this is the tendency to look only for “data” that supports our existing premises.

By learning to spot these fallacies, we can “clean” our internal thought processes, much like a developer cleans “spaghetti code” to make it more efficient.


Why Logic Matters to Our Readers

  • Critical Problem Solving: Logic provides a step-by-step framework for troubleshooting any issue, whether it’s a broken script or a complex business decision.

  • Clarity of Communication: When you structure your thoughts logically, you can present your ideas more persuasively and avoid misunderstandings.

  • Digital Literacy: Understanding Boolean logic and syllogisms helps you understand how algorithms work and how AI arrives at its conclusions.

The Science of Knowing: Why Epistemology is the Key to Information Literacy

At Iverson Software, we specialize in educational references. But before you can use a reference, you have to trust it. Epistemology is the branch of philosophy that studies the nature, origin, and limits of human knowledge. It asks the fundamental question: How do we know what we know? By applying epistemological rigor to our digital lives, we can become better researchers, developers, and thinkers.

1. Defining Knowledge: The “JTB” Model

For centuries, philosophers have defined knowledge as Justified True Belief (JTB). To claim you “know” something, three conditions must be met:

  • Belief: You must actually accept the claim as true.

  • Truth: The claim must actually correspond to reality.

  • Justification: You must have sound evidence or reasons for your belief.

In the digital age, “justification” is where the battle for truth is fought. We must constantly audit our sources to ensure our beliefs are built on a solid foundation of data.
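
As a toy formalization (philosophers state this in logic rather than code, and the predicate names below are our own), JTB is simply a conjunction of the three conditions:

```python
def knows(believed: bool, true_in_fact: bool, justified: bool) -> bool:
    """JTB: a subject knows a claim only if all three conditions hold."""
    return believed and true_in_fact and justified

# A lucky guess: believed and true, but unjustified. Not knowledge on JTB.
print(knows(believed=True, true_in_fact=True, justified=False))  # False
```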

2. Rationalism vs. Empiricism: Two Paths to Data

How do we acquire information? Epistemology offers two primary frameworks:

  • Rationalism: The belief that knowledge comes primarily from logic and reason (innate ideas). This is the “source code” of mathematics and pure logic.

  • Empiricism: The belief that knowledge comes primarily from sensory experience and evidence. This is the “user testing” of the scientific method, where we observe and measure the world.

Modern success requires a hybrid approach: using logic to build systems and empirical data to verify that they actually work in the real world.

3. The Problem of Induction and “Black Swans”

Philosopher David Hume famously questioned induction—the practice of assuming the future will resemble the past because it always has.

  • The Bug in the System: Just because a piece of software has never crashed doesn’t prove it never will; the unprecedented failure is the “Black Swan.”

  • Epistemic Humility: Epistemology teaches us to remain open to new evidence that might “falsify” our current understanding, a concept central to both science and agile software development.

4. Epistemology in the Age of AI and Misinformation

With the rise of generative AI and deepfakes, the “limits of knowledge” are being tested like never before. Epistemology provides the toolkit for navigating this:

    • Reliability: How consistent is the process that produced this information?

    • Testability: Can this claim be verified by an independent third party?

    • Cognitive Biases: Recognizing that our own “internal software” often distorts the data we receive (e.g., confirmation bias).


Why Epistemology Matters to Our Readers

  • Critical Thinking: It moves you from a “passive consumer” of content to an “active auditor” of truth.

  • Better Research: Understanding the nature of evidence helps you find higher-quality sources in any reference library.

  • Information Resilience: In a landscape of “fake news,” epistemology is your firewall against manipulation.

The Logic of Life: Why Philosophy is the Original Operating System

At Iverson Software, we spend a lot of time thinking about structure, logic, and how information is organized. While we often associate these concepts with modern coding, their true roots lie in philosophy. Long before the first line of code was written, philosophers were building the logical frameworks that make modern technology possible.

1. Logic: The Syntax of Thought

The same logic that powers a computer program today—Boolean logic, if-then statements, and syllogisms—was pioneered by thinkers like Aristotle. Philosophy teaches us how to:

  • Deconstruct Arguments: Breaking down complex ideas into their smallest logical parts.

  • Identify Fallacies: Recognizing “bugs” in human reasoning that lead to incorrect conclusions.

  • Define Terms: Ensuring that everyone is operating from the same set of definitions, much like a global variable in a program.

2. Ethics in the Digital Age

As we build more powerful tools and reference systems, the “why” becomes just as important as the “how.” Philosophy provides the ethical compass for:

  • Data Privacy: Navigating the balance between information access and individual rights.

  • Artificial Intelligence: Questioning the moral implications of machines that can “think” or make decisions.

  • Knowledge Accessibility: Determining the fairest ways to share educational resources with the world.

3. Epistemology: How Do We Know What We Know?

Epistemology—the study of knowledge—is at the heart of any reference site. In an era of “information overload,” philosophy helps us distinguish between:

  • Data vs. Wisdom: Raw facts are only useful when they are contextualized by understanding.

  • Reliability: Developing the criteria for what constitutes a “trusted source” in a digital landscape.


Why Philosophy Matters to Our Readers

  • Problem Solving: Philosophy trains the mind to approach problems from first principles.

  • Clarity of Communication: Learning to express complex ideas clearly is a “soft skill” with “hard results” in any profession.

  • Global Perspective: Understanding different philosophical traditions allows us to build tools that are inclusive and universally useful.