Sugar Addicts: Big Food & the Duty of Care

Written by Joss Duggan (10 min read)

In every supermarket, in every country, we’re faced with thousands of different brands - a startling level of choice that our ancestors could never have imagined. But this proliferation of ‘hyper-palatable foods’ gives us only the illusion of choice; beneath the packaging, the reality is that a handful of corporations control the vast majority of the global food supply.

Just four companies - Cargill, Archer Daniels Midland, Bunge, and Louis Dreyfus - dominate the world’s grain trade; Bayer, Corteva, and Syngenta control most of the seeds that farmers plant; and Nestlé, PepsiCo, Unilever, and Mondelez own a staggering number of the processed-food brands that fill supermarket shelves.

From breakfast cereals to bottled water, from frozen meals to pet food, a shrinking number of multinational giants dictate what we eat, how it’s produced, and who profits. This is the world of Big Food - a system where corporate consolidation shapes the global diet, often prioritising efficiency and shareholder returns over public health and sustainability, to say nothing of competition and consumer choice.

In an era where public health systems are under unprecedented strain from diet-related disease (e.g., obesity, ischaemic heart disease, diabetes, and metabolic syndrome), should we be questioning the role of the global food industry?

Does the industry have a ‘duty of care’ towards consumers? And if so, what does that responsibility entail?

What is a Duty of Care?


Ultra-processed foods are cheap, convenient, and literally engineered to be irresistibly tasty - a phenomenon known as “hyper-palatability.” While these foods are profitable, they often come at a high cost to public health. Their design encourages overconsumption, leading to diet-related diseases that have become all too common. The obesity epidemic, driven in large part by the ubiquity of these foods, diminishes quality of life, fuels chronic disease, and strains healthcare systems worldwide.

With such control comes the question: should they be held accountable for the health outcomes linked to their products?

What responsibility, if any, does the food industry hold in this context? The idea of a “duty of care” traditionally refers to the obligation of individuals or organisations to avoid actions that could foreseeably harm others. Arguably, this principle should extend to corporations that feed billions of people every day…

A duty of care can be understood in both legal and ethical terms. Legally, it implies accountability for actions that might harm others. Ethically, it stretches beyond mere compliance to include moral obligations that prioritise well-being. For corporations, this raises the question: should their actions aim solely at profit, or should they also consider the broader impact on public health?

A company that knowingly creates and sells a harmful product - whether a defective car, a dangerous drug, or an unsafe toy - can be held legally liable for the damage it causes. The food industry, however, has long evaded this level of accountability by arguing that its products are not inherently harmful, only harmful when consumed in excess.

This argument mirrors the early defense of Big Tobacco, which framed smoking as a matter of personal choice, despite overwhelming evidence that cigarettes were designed to hook users and harm health.

The reality is that food companies do far more than just sell food; they actively shape dietary norms, influence consumer behavior, and lobby against public health policies. Given this level of control, the question isn’t whether they play a role in diet-related disease, but whether they should be obligated to mitigate harm rather than exploit it.




The Ethics of Responsibility

If feeding a population is a company’s primary function, then doesn’t it have an ethical obligation to ensure its products promote health rather than undermine it? Philosophy offers several useful lenses through which we can consider this question.

1. Utilitarianism: Does Big Food Maximise Well-Being?

A utilitarian perspective asks whether the net effect of the food industry benefits or harms society. On one hand, Big Food has played a role in reducing global hunger, increasing food accessibility, and driving economic growth. On the other, ultra-processed foods have fuelled an epidemic of preventable disease, placing a crippling burden on healthcare systems.

If the overall impact of an industry leads to a public health crisis, can it claim to be maximising well-being - or has it sacrificed long-term health for short-term profit?

2. Deontological Ethics: Are There Moral Duties Beyond Profit?

Deontological ethics focuses on duty and principles rather than consequences. Should food companies have a moral duty to uphold transparency, honesty, and public well-being, regardless of financial incentives? Yet many companies prioritise legal loopholes over genuine responsibility:

  • High-sugar, high-fat products are marketed as "healthy choices."

  • Children are deliberately targeted with advertisements before they develop impulse control.

  • Companies spend billions lobbying against public health measures that would reduce consumption of harmful foods.

If food companies were required to uphold moral obligations beyond financial gain, how different would the industry look?

3. Virtue Ethics: Are Food Companies Good Corporate Citizens?

Virtue ethics evaluates companies based on character and values. A "good" food company would demonstrate compassion, responsibility, and restraint, ensuring that its products nourish rather than harm.

Instead, many corporations:

  • Comply with the bare minimum of regulation rather than adopting ethical best practices.

  • Actively resist efforts to introduce policies that might reduce diet-related disease.

  • Engineer foods for addiction, prioritizing profits over consumer well-being.

This raises a crucial question: Are these businesses simply playing within the rules, or are they actively exploiting them? A responsible food industry wouldn’t just seek to avoid lawsuits—it would proactively work toward a system that benefits both business and public health.

Healthy citizens are the greatest asset any country can have
— Winston Churchill


A Legal Grey Area

Unlike Big Tobacco or Big Pharma, food corporations operate in something of a legal loophole when it comes to public health accountability. They must comply with basic food safety laws, ensuring products are not immediately dangerous, but there is no overarching legal requirement for them to prioritise long-term health over profit.

Several governments have introduced piecemeal regulations (e.g., sugar taxes, front-of-pack warning labels, and restrictions on marketing to children) but these efforts are reactive, nudging consumer behaviour without addressing the systemic drivers of unhealthy diets. Meanwhile, lawsuits over deceptive branding (e.g., sugary cereals marketed as “heart-healthy”) have occasionally forced rebranding but have not challenged the core business model that profits from overconsumption.

Other industries have not been given the same level of legal immunity.

  • Tobacco companies were ultimately forced to acknowledge their role in fueling lung disease and pay billions in settlements.

  • Pharmaceutical companies have faced legal action for their role in the opioid epidemic, with courts recognising their responsibility for public health harm.

  • Alcohol and gambling industries have been subjected to strict regulations to minimise societal harm.

Given the overwhelming evidence that ultra-processed foods are designed to drive overconsumption and contribute to chronic disease, why should Big Food remain exempt from similar scrutiny?

We should resolve now that the health of this nation is a national concern; that financial barriers in the way of attaining health shall be removed; that the health of all its citizens deserves the help of all the nation
— Harry S. Truman

The food industry has perfected the art of hyper-palatable, ultra-processed foods that override natural hunger cues, flood the brain with dopamine, and encourage compulsive overconsumption. The result? A public health disaster in slow motion. Rates of obesity, type 2 diabetes, and cardiovascular disease—all linked to diet—have skyrocketed, now outpacing smoking as a leading cause of preventable death.

The parallels to Big Tobacco are unavoidable. Decades ago, cigarette companies claimed their products were about personal choice, while privately engineering them to be more addictive. Today, food corporations use "bliss point" formulations - the precise sugar-fat-salt combinations designed to trigger maximum pleasure and keep consumers hooked.

If the same ethical reasoning that held tobacco companies accountable is applied here, then Big Food has a case to answer.

Let food be thy medicine and medicine be thy food
— Hippocrates




What About Personal Responsibility?


Critics might argue that the responsibility lies solely with individuals. The concept of radical personal responsibility places the onus on consumers to make informed choices. However, this perspective overlooks the fact that true responsibility requires awareness and education. People can only make good choices if they are aware of the consequences and equipped with the knowledge to act differently. Yet, even with awareness, behavior change is not guaranteed. Why, for example, do many people knowingly opt for hyper-palatable, less nutritious options despite understanding that chicken and broccoli are healthier? This suggests that factors beyond individual choice are at play.

No-one ever got mugged by a doughnut
— Charles Poliquin

Education Isn’t Enough

While education is a critical component of promoting healthier lifestyles, it cannot be the sole solution. Knowledge doesn’t always translate into action, especially when the competing options are scientifically engineered to be more appealing. The food industry’s aggressive marketing tactics and strategic placement of unhealthy products add another layer of complexity, making it harder for consumers to make healthy choices consistently.

Manipulation and Consumer Autonomy

A key issue lies in how much autonomy consumers truly have when choosing their food. Sophisticated marketing campaigns and food engineering that capitalize on the science of addiction often blur the line between choice and manipulation. If consumer autonomy is compromised, can we still hold individuals fully accountable for their dietary habits?

The argument for personal responsibility often overlooks the socioeconomic barriers that affect food choices. Lower-income communities frequently have less access to healthy, affordable options, complicating the notion of individual responsibility. Addressing these disparities requires systemic change that involves both industry action and public policy.


What should we be doing about it?

Modern foods are literally designed to be irresistible, maximising consumption and profitability…

As we’ve seen, food companies operate in a legal grey area: beyond basic food safety laws, there is no overarching duty of care requiring them to prioritise nutrition over profit, and piecemeal regulation and occasional lawsuits over deceptive branding have not challenged the business model itself. So what would meaningful accountability look like? Three avenues stand out: industry-led reform, government policy, and structural change.





1. Industry-Led Reforms: A Healthier Business Model

Food companies cannot credibly claim a duty of care while continuing to maximise profit through unhealthy products. If they acknowledge that duty, the following steps should be fundamental to their operations.

Reformulating Products to Improve Nutritional Quality

  • Reduce excessive sugars, trans fats, and sodium, which are major contributors to obesity, heart disease, and metabolic disorders.

  • Shift toward nutrient-dense formulations, ensuring that products deliver more than just empty calories.

  • Invest in research and development to create alternatives that balance taste, affordability, and nutrition.

Some companies have begun gradually reducing sugar content in soft drinks to avoid alienating consumers. A wider industry shift could normalize healthier versions of familiar products.

Improving Label Transparency

  • Clear, front-of-pack warnings on ultra-processed foods, similar to cigarette-style health warnings in Chile and Mexico.

  • Eliminate misleading branding, such as cereals marketed as "heart-healthy" despite containing high amounts of sugar.

  • Standardized "traffic light" nutrition labels to allow consumers to compare products at a glance.

The Nutri-Score system in Europe provides a simple A–E health rating on food packaging. Expanding this system globally could force manufacturers to compete on health, not just taste and price.
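To make the idea concrete, here is a heavily simplified, illustrative sketch of how a Nutri-Score-style rating could work: ‘unhealthy’ nutrients earn penalty points, ‘healthy’ components earn credits, and the net score maps to an A–E band. The real Nutri-Score algorithm uses official per-100g nutrient point tables; every step size, cap, and cut-off below is an assumption invented purely for illustration.

```python
def points(value, step, cap=10):
    """Award one point per `step` units of a nutrient, up to `cap` (illustrative)."""
    return min(int(value / step), cap)

def simple_score(energy_kj, sugars_g, sat_fat_g, sodium_mg,
                 fruit_veg_pct, fibre_g, protein_g):
    """Penalty points for 'bad' nutrients minus credits for 'good' components."""
    negative = (points(energy_kj, 335) + points(sugars_g, 4.5)
                + points(sat_fat_g, 1) + points(sodium_mg, 90))
    positive = (points(fruit_veg_pct, 20, cap=5) + points(fibre_g, 0.9, cap=5)
                + points(protein_g, 1.6, cap=5))
    return negative - positive

def to_letter(score):
    """Map a numeric score to an A-E band (cut-offs are illustrative, not official)."""
    for cutoff, letter in [(-1, "A"), (2, "B"), (10, "C"), (18, "D")]:
        if score <= cutoff:
            return letter
    return "E"

# A sugary breakfast cereal (hypothetical per-100g figures) lands mid-scale:
print(to_letter(simple_score(1600, 25, 1, 400, 0, 3, 6)))   # "C"
# A vegetable soup (also hypothetical) scores near the healthy end:
print(to_letter(simple_score(250, 4, 0.5, 200, 80, 2, 2)))  # "A"
```

The design point is the one the article makes: once a single letter sits on the front of the pack, reformulating to move from a D to a B becomes a visible, competitive act rather than an invisible one.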

Responsible Marketing Practices

  • Restrict advertising of unhealthy foods to children, as has been implemented in Norway, Chile, and the UK.

  • Ban cartoon mascots and misleading health claims on products high in sugar, fat, or sodium.

  • Limit in-store placement of ultra-processed foods, particularly near checkout areas.

The UK has banned junk food advertising before 9 PM to limit children's exposure. More countries could follow suit, applying restrictions across digital platforms and social media.

2. The Role of Governments: Policy-Driven Accountability

While voluntary corporate responsibility is ideal, history shows that profit-driven industries rarely self-regulate effectively. Governments, therefore, have a role in enforcing accountability and creating a regulatory environment that prioritizes public health.

Taxation and Pricing Strategies to Shift Consumption Patterns

  • Implement sugar taxes on sodas and ultra-processed foods to disincentivize overconsumption.

  • Provide subsidies for fresh produce to reduce the price gap between processed convenience foods and whole foods.

  • Offer incentives for companies reformulating products, rewarding those who reduce harmful ingredients.

After Mexico introduced a sugar tax, soda sales dropped by 7.6% in the first year, with a 17% reduction among low-income households. Revenue from the tax was reinvested in public health initiatives.

Public Health Campaigns and Consumer Education

  • Nationwide campaigns educating consumers about ultra-processed foods and their risks.

  • Mandatory food literacy education in schools so children develop critical thinking skills about food choices.

  • Public service announcements countering misleading industry narratives about diet and health.

Finland ran a public awareness campaign about salt consumption, leading to a 25% decrease in national sodium intake and a 75% drop in deaths from stroke and heart disease over 30 years.

3. Structural Change: Aligning Incentives with Public Health

One of the biggest barriers to change is that ultra-processed foods remain the most profitable segment of the industry. As long as hyper-palatable, low-cost foods generate the highest margins, companies will continue prioritizing them.

To realign incentives:

  • Investors could push food companies to shift toward long-term health-conscious strategies rather than maximizing short-term profits from unhealthy foods.

  • Governments could reward responsible companies through tax benefits and incentives for sustainable, nutritious product lines.

  • Consumer demand could play a role—if people shift their purchasing habits, companies will adapt accordingly.

These structural changes require a coordinated effort between industry, government, and consumers.


Final Considerations

The food industry faces increasing scrutiny over its role in diet-related diseases. While some companies have taken steps toward reform, voluntary action has been slow, and the incentives driving food production remain largely unchanged.

If the industry is serious about upholding a duty of care, it must move beyond regulatory compliance and consider how its products impact long-term public health. Governments, too, have a role in ensuring that economic structures do not favor the mass production of unhealthy foods at the expense of consumer well-being.

Without meaningful change, continued pressure from regulators, public health organizations, and consumers is likely to shape the future of food policy in ways that will make reform unavoidable.

Failing to address the industry’s potential duty of care could have dire consequences for public health and future generations. Rising healthcare costs and diminishing quality of life will likely persist unless meaningful action is taken. This raises an important question: If we accept that the food industry bears some level of responsibility, how do we ensure they fulfill it effectively?

As we consider whether “big food” has a duty of care, it’s essential to recognize the balance between corporate responsibility and individual agency. Both the industry and the government must play proactive roles in fostering a healthier society. And finally, it’s worth asking: what other industries have a duty of care toward their consumers, and what lessons can be drawn from their approaches?

These frameworks don’t offer easy answers, but they force us to ask better questions: Does an industry that profits from overconsumption have a moral duty to limit harm? Should corporations be rewarded for selling what is legal, even if it is harmful? And if companies won’t regulate themselves, should society step in?

If a product is engineered to be addictive, aggressively marketed to vulnerable populations, and directly contributes to preventable disease, should the company behind it be held accountable? This is the question at the heart of the Big Food debate.


Further Reading

Moss, Michael: Salt Sugar Fat: How the Food Giants Hooked Us (2013)

Nestle, Marion: Food Politics: How the Food Industry Influences Nutrition and Health (2002)

Wilson, Bee: The Way We Eat Now: How the Food Revolution Has Transformed Our Lives, Our Bodies, and Our World (2019)

Van Tulleken, Chris: Ultra-Processed People: The Science Behind Food That Isn't Food (2023)

Spector, Tim: Food for Life: The New Science of Eating Well (2022)

Monteiro, Carlos et al.: Ultra-processed foods: What they are and how to identify them (2019)

The Lancet Commission: The Global Syndemic of Obesity, Undernutrition, and Climate Change (2019)

The BMJ: Consumption of ultra-processed foods and risk of mortality: A systematic review and meta-analysis (2023)

World Health Organization (WHO): Healthy Diet Fact Sheet (2022)

Moss, Michael: The Extraordinary Science of Addictive Junk Food (The New York Times, 2013)



No disease that can be treated by diet should be treated with any other means
— Moses Maimonides

Thought Experiments: The Original Position

What does a fair society look like? How would we decide the rules of justice if we were completely impartial? These questions lie at the heart of John Rawls’ Original Position, one of the most influential thought experiments in modern political philosophy.

Rawls introduced this concept in A Theory of Justice (1971) as a way to determine the principles of justice that should govern a fair society. The Original Position is a hypothetical scenario in which rational individuals, stripped of personal biases and self-interest, decide the rules that will shape their society. To ensure fairness, they operate under a Veil of Ignorance, meaning they do not know their own social status, wealth, race, gender, or natural abilities.

This article explores Rawls’ Original Position, the Veil of Ignorance, its implications for justice, and criticisms of the theory.

The Original Position: A Fair Starting Point

Rawls' thought experiment asks us to imagine a rational deliberation process where individuals are tasked with designing the fundamental rules of their society. However, to ensure objectivity, these individuals do not know:

  • Their economic or social status (rich or poor, privileged or disadvantaged).

  • Their race, gender, or ethnicity.

  • Their intelligence, talents, or abilities.

  • Their personal values, religious beliefs, or conceptions of the good life.

This lack of knowledge is enforced by the Veil of Ignorance, a conceptual device ensuring that no one can craft laws that favor their particular position in society.

Rawls' core idea is that justice should not be based on luck, birth, or arbitrary advantages. If people do not know where they will end up in society, they will choose fair and equal principles that protect everyone, including the least advantaged.

Two Principles of Justice

Rawls argues that rational individuals in the Original Position would choose two key principles of justice:

1. The Equal Basic Liberties Principle

Each person is entitled to the most extensive set of equal basic liberties possible, as long as they do not infringe upon others' freedoms. These liberties include:

  • Freedom of speech and thought

  • Religious liberty

  • The right to own personal property

  • Freedom of association

  • Protection from arbitrary arrest or discrimination

This principle ensures that all individuals have fundamental rights that cannot be sacrificed for economic gain or societal efficiency.

2. The Difference Principle

Social and economic inequalities are only justifiable if they meet two conditions:

  1. They benefit the least advantaged members of society (the "maximin" rule—maximize the minimum position).

  2. They are attached to positions that are open to all under conditions of fair equality of opportunity.

This means that inequalities (such as differences in income, wealth, or status) are only acceptable if they improve the lives of the poorest and are based on merit rather than privilege.

For example, doctors may earn more than janitors, but only if the higher salaries encourage skilled individuals to become doctors (benefiting society as a whole), and only if the opportunity to become a doctor is accessible to everyone, not just the wealthy or well-connected. In other words, Rawls' Difference Principle ensures that wealth and power do not concentrate at the expense of the worst-off.
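The maximin rule described above can be sketched as a toy decision procedure: given several candidate distributions of income across social positions, choose the one whose worst-off position is best. The distributions and figures below are hypothetical illustrations, not from Rawls.

```python
# Hypothetical income distributions across three social positions (worst to best).
distributions = {
    "strict_equality":  [30, 30, 30],
    "mild_inequality":  [35, 45, 60],   # inequality that lifts the worst-off
    "steep_inequality": [20, 50, 100],  # largest total, but the worst-off are poorer
}

def maximin_choice(options):
    """Pick the distribution that maximises the minimum share (the maximin rule)."""
    return max(options, key=lambda name: min(options[name]))

print(maximin_choice(distributions))  # "mild_inequality": its minimum (35) beats 30 and 20
```

Note that maximin deliberately ignores the total: `steep_inequality` has the largest sum but is rejected because its worst-off position is the poorest, which is exactly the intuition behind the Difference Principle.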

Why the Original Position Matters

The Original Position is more than just an intellectual exercise—it provides a framework for thinking about fairness, justice, and policy-making. Its implications extend to politics, economics, and human rights.

1. A Model for Ethical Decision-Making

Rawls’ theory suggests that laws and policies should be designed as if they were chosen from behind a Veil of Ignorance. This is a useful tool for evaluating justice:

  • Would you support a tax system if you didn't know whether you’d be rich or poor?

  • Would you design a healthcare system if you didn’t know whether you’d be born healthy or disabled?

  • Would you allow racial discrimination if you didn’t know your own race?

2. Justifying Social Welfare & Redistribution

The Difference Principle provides a strong moral argument for progressive taxation, social safety nets, and public education. Since economic inequalities are only justified if they benefit the least advantaged, a system that lifts the poor while allowing economic incentives aligns with Rawls’ vision of justice.

3. A Defense of Liberal Democracy

Rawls’ Equal Basic Liberties Principle supports constitutional democracy, protecting freedoms such as speech, religion, and equal legal treatment. Any system that restricts fundamental rights—such as authoritarianism or caste-based discrimination—would not be chosen under the Original Position.

Criticisms of the Original Position

While Rawls’ thought experiment has been widely influential, it is not without criticism. Some philosophers argue that the Original Position is unrealistic or overly abstract.

Critics argue that people are not truly impartial and that real-world decision-making is driven by self-interest, cultural biases, and historical context. Political philosopher Robert Nozick challenged Rawls by arguing that justice should be based on individual liberty and property rights, rather than redistributive fairness.

Some economists argue that the Difference Principle discourages innovation and effort. If the talented and hardworking must always prioritise the least advantaged, will they still have incentives to excel? Libertarians argue that free markets, not government intervention, create the best opportunities for the disadvantaged.

Communitarian philosophers argue that Rawls' Veil of Ignorance ignores cultural values and traditions. Justice is not purely abstract but shaped by history, identity, and community ties.

Conclusion

Rawls' Original Position remains one of the most compelling frameworks for understanding justice and fairness. By imagining a world where no one knows their own social position, Rawls forces us to consider what a truly just society would look like.

His two principles - the Equal Basic Liberties Principle and the Difference Principle - offer a vision of a society where freedom, opportunity, and fairness coexist. While debates continue about the feasibility of Rawls' ideas, the Original Position provides a benchmark against which real-world policies can be judged.

Ultimately, Rawls' theory challenges us to ask: Would we accept the rules of society if we didn’t know where we’d end up? If not, those rules may not be just.

Corporate Renewal Part I: How great companies succumb to entropy

Written by Joss Duggan (Reading Time: 25 mins)

Every great company, no matter how dominant, is constantly battling an invisible yet relentless force: entropy. In physics, entropy describes the gradual decline of order into disorder unless energy is continuously applied. In business, the same principle holds true. Unless we’re diligent, our organisations naturally tend towards stagnation, complexity, and finally decline: the once-nimble, high-performing enterprise becomes burdened with bureaucracy, risk aversion, and complacency.

Yet business failure rarely announces itself with a single, catastrophic event. It is not an ‘explosion’ but an ‘erosion’ - a slow and often imperceptible decay. Processes that once drove efficiency become rigid barriers to change; leaders who once championed bold innovation start protecting their empires; market-leading products that once delighted customers turn into outdated relics as newer, more dynamic competitors emerge.

How did you go bankrupt? Two ways. Gradually, then suddenly...
— Ernest Hemingway

The most insidious part is that this kind of corporate death happens slowly, creeping in day after day, so we don’t notice the gradual decay. Year after year, small compromises accumulate - decisions that prioritise short-term stability over long-term adaptability, policies designed to minimise risk rather than maximise opportunity, and an increasing reliance on what has worked in the past rather than what is required for the future. By the time these issues become undeniable, the organisation is often locked in a defensive posture, reacting to crises rather than anticipating them.

But this is where the analogy falters, because unlike death in living organisms, the death of companies is not inevitable; the most resilient recognise the warning signs early and take decisive action before decline sets in.

The question isn’t whether your company will face entropy - it will. The challenge is recognising the signs of stagnation before they become irreversible. In the article ahead, we’ll look at the life-stages companies go through, the reasons companies fail, and the warning signs that things are going terribly wrong…

Lifecycles: From Scrappy Upstart to Slow Giant

Companies rarely fail due to a lack of intelligence, talent, or resources. More often, they decline because the very mechanisms that once propelled them to success eventually become the shackles that hold them back. Growth breeds complexity, and over time, what was once a thriving, fast-moving organisation becomes slow, risk-averse, and bureaucratic. It’s worth mentioning that the first three stages below are prerequisites for successful businesses, but stage 4 is optional!

  1. Introduction (The Startup Phase) – A company in its infancy is defined by speed and agility. Entrepreneurs drive every decision, wearing multiple hats and adapting quickly to market needs. There is little hierarchy, and failure is seen as an essential part of the learning process. Going from zero-to-one is hard and this period is chaotic and painful.

  2. Growth (The Scale-Up Phase) – As initial success takes hold, there’s the feeling that we need to “professionalise” in order to scale effectively. Teams grow, hierarchies develop, and systems are put in place to streamline operations. While necessary for sustained expansion, this phase introduces the first signs of rigidity. Decision-making, once fast and instinctive, now requires approvals and adherence to formal processes…

  3. Maturity – The company is now an industry leader, prioritising stability over change. Efficiency and consistency become the primary focus, with structured workflows and an emphasis on reducing risk. While this approach maximises short-term profits, it gradually erodes the organisation’s ability to adapt to disruptions. Leaders who once took bold risks now spend more time protecting existing revenue streams rather than exploring new ones.

  4. Decline – The once-thriving company becomes entrenched in outdated methods, slow decision-making, and excessive bureaucracy. Market competitors, often more agile startups, begin to chip away at its dominance. Internal culture shifts from innovation to preservation, and resistance to change becomes a defining characteristic. By the time leadership acknowledges the problem, it is often too late to reverse course without significant restructuring.

It’s rare but not impossible to find leaders who can be brilliant at all stages of the life-cycle. Founders building from the ground up tend to have very different personalities (and risk tolerances) to those who manage the optimisation of a company at scale.

Understanding these phases is critical to putting corporate failure in context. Once a start-up reaches ‘escape velocity’ and starts scaling, that, ironically, is where things start to go wrong. The things that allow us to scale are the very things that become a straitjacket and can eventually kill the company.

What the hell happened?! Why companies go to the wall

Every failed business has a story, but patterns show up repeatedly. Whether through strategic missteps, financial mismanagement, cultural decay, or market forces, businesses that fail display the same problems again and again.

I. Strategic Failures

Losing Product-Market Fit

As a founder, you're obsessed with finding the holy grail of product-market fit (PMF), though it’s hard to tell exactly when you’ve nailed it. Building things that people don’t need is a key reason early-stage ventures go bust.

What's less written about, though, is how companies can lose PMF without even realising it. If customers are locked into long contracts, the revenue feedback loop is long delayed.

When you lose a big customer, it's easy to chalk it up to things outside your control. But in truth, it's often because your product or service just isn't good enough anymore, and your customers are finding other ways to solve the same problem better, for less money.


II. Financial Mismanagement & Resource Allocation

Running Out of Cash

Even profitable businesses can fail if they mismanage their cash flow. High burn rates, underestimating costs, and poor fundraising strategies leave companies vulnerable when capital markets tighten. Many startups fall into the trap of raising funds at ever-higher valuations without proving profitability, only to collapse when investors demand sustainable unit economics.

Businesses must generate profit at the unit level. If customer acquisition costs (CAC) exceed customer lifetime value (LTV), or margins are too slim to cover operational expenses, the business becomes unsustainable. Many companies try to grow their way out of bad unit economics, but without a fundamental fix, they only accelerate their losses.
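The unit-economics rule above can be sketched as simple arithmetic. The figures, the `lifetime_value` helper, and the common "3x LTV-to-CAC" rule of thumb below are all illustrative assumptions, not numbers from this article:

```python
# Toy unit-economics check: is each customer worth more than they cost
# to acquire? All figures are hypothetical.

def lifetime_value(arpu: float, gross_margin: float, lifetime_months: float) -> float:
    """LTV: monthly revenue per customer x gross margin x expected lifetime."""
    return arpu * gross_margin * lifetime_months

def is_sustainable(ltv: float, cac: float, min_ratio: float = 3.0) -> bool:
    """A common rule of thumb holds that LTV should be roughly 3x CAC or better."""
    return ltv >= min_ratio * cac

ltv = lifetime_value(arpu=100.0, gross_margin=0.7, lifetime_months=24)
print(ltv)                           # 1680.0
print(is_sustainable(ltv, cac=400))  # True: 1680 >= 3 * 400
print(is_sustainable(ltv, cac=700))  # False: 1680 < 3 * 700
```

Note that acquiring more customers doesn't change the inequality: if the per-customer check fails, growth only multiplies the loss.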

A company that derives too much revenue from just a few large customers is at extreme risk. If one key client leaves, it can trigger financial instability or even collapse the business overnight. Concentration risk is particularly dangerous for B2B companies that rely on a handful of major contracts. When customers hold too much power in negotiations, they can demand lower prices or extended payment terms, squeezing margins and leaving little room for reinvestment.


III. Internal Dysfunction: Leadership & Cultural Decay

Weak Leadership & Poor Execution

Strong leadership is critical in steering a business through uncertainty. Companies with indecisive, slow-moving leadership struggle to adapt to market shifts, fail to align teams with a clear vision, and often make reactive rather than strategic decisions.

Risk Aversion and Learned Helplessness

In declining companies, the answer to every mistake is more rules, and personalities who want control and score low on openness to new experience are preferred. Netflix co-founder Reed Hastings has spoken extensively about why companies become bureaucratic and slow. His insight: rules are created in response to mistakes, and over time those rules strangle the best employees.

  • Instead of allowing people to take smart risks, organizations add layers of approvals.

  • This leads to a stifling work environment where innovation is secondary to process compliance.

  • The most talented employees, who thrive in flexible, high-trust environments, leave.

  • What remains is a company filled with employees who are risk-averse and less capable of driving growth.

Toxic Company Culture & High Employee Turnover

A dysfunctional work environment drives away top talent, leaving a company with disengaged employees and reduced innovation. High turnover is not just expensive—it erodes institutional knowledge and slows down execution. A culture that rewards internal politics over merit also contributes to long-term decline.

As businesses scale, they inevitably shift from attracting high-risk, high-reward entrepreneurs to more conservative, process-driven managers. The very nature of a large corporation incentivizes stability and predictability over boldness and innovation.

  • Risk-averse employees thrive: As organizations mature, they develop incentives that reward predictability, discouraging individuals who thrive in fast-moving, uncertain environments. Employees who excel at maintaining the status quo rise through the ranks, while those who challenge norms often leave.

  • Bureaucrats over builders: Large companies increasingly favor people who are good at navigating internal politics over those who create new products, services, or markets. These bureaucrats specialize in risk mitigation and compliance, but they struggle to drive the kind of step-change innovation that propels a company forward.

  • Comfort replaces urgency: High-growth, high-impact employees are drawn to environments where they can make a difference. When a company stagnates, it repels the kind of ambitious, forward-thinking talent that could help it regain its edge.

Additionally, large companies tend to promote employees based on tenure and internal political capital rather than merit. This creates a leadership class that is more concerned with preserving their status than pushing the company forward. Over time, this results in an organization that lacks the bold decision-making needed to remain competitive.

One of the most insidious aspects of corporate hiring is that it only takes a single B-player in a key position to begin the downward spiral. B-players tend to hire C-players because they don’t want to be challenged or outperformed. This creates a slow but inevitable dilution of talent, leading to departments that underperform for years. A single bad hire, especially in leadership, can cause long-term damage, eroding the effectiveness of an entire function.


IV. Operational & Market Failures

Poor Sales & Marketing Strategy

A misaligned go-to-market strategy leads to inefficient customer acquisition. A business that overspends on marketing with low conversion rates or underinvests in sales development will struggle to sustain growth. The imbalance between CAC and LTV is a clear warning sign of a failing sales and marketing approach.

In the early days of a business, it's probable that the founder themselves is selling; then, as the company matures, a sales team is hired to make these efforts scalable. A winning mentality is crucial - it's what helps the company dominate its market. Revenues scale, a market equilibrium is reached, and there's less and less new business to go after...

In stagnant businesses, hard-nosed salespeople are no longer needed because existing customers generate most of the revenue. Over time, go-to-market (GTM) capabilities atrophy as new business efforts take a backseat to account management. Instead of closing deals, sales teams become "farmers," nurturing existing relationships rather than "hunters" who drive aggressive revenue growth.

While farming is important, without a healthy balance of new customer acquisition, a business will eventually erode as competitors lure customers away and market opportunities disappear.

Technological Obsolescence, Market Shifts & Losing Product Market Fit

Industries evolve, and companies must evolve with them. Businesses that fail to keep up with technological advancements risk being left behind. This is especially true in sectors like fintech, healthcare, and SaaS, where innovation cycles are rapid. Market shifts—whether due to changing consumer behavior, regulatory changes, or emerging competitors—can rapidly turn yesterday’s market leaders into today’s laggards.

Did your IT department successfully convince you that they’re a “tech” firm? CIOs who want to be CTOs can be dangerous.


V. External Forces & Uncontrollable Factors

Regulatory & Compliance Issues

Regulatory blind spots can sink even the most promising businesses. Non-compliance with industry standards can lead to lawsuits, fines, and reputational damage. Heavily regulated industries, such as finance and healthcare, require businesses to navigate complex legal landscapes—failure to do so can be existentially damaging.

Macroeconomic & External Factors

Recessions, supply chain disruptions, interest rate hikes, and global crises (such as pandemics) can cripple even well-managed businesses. While external forces may be beyond a company’s control, its ability to anticipate, adapt, and respond to these challenges often determines survival. Companies that maintain strong financial discipline, diversify revenue streams, and build operational resilience are best positioned to weather such storms.

Warning Signs

Financial Red Flags

  • Cash Flow Crisis – The company is constantly worried about cash flow, struggling to make payroll, or delaying supplier payments.

  • Declining Revenue & Profitability – There’s a clear downward trend in revenue, and margins are shrinking.

  • Missed Targets – The company repeatedly misses sales, revenue, or profit targets, but leadership keeps moving the goalposts.

  • Mounting Debt & Covenant Breaches – The company is relying heavily on debt, struggling with interest payments, or breaching bank covenants.

  • Asset Sales & Cost-Cutting – Fire sales of assets, layoffs, or drastic budget cuts signal short-term survival tactics rather than long-term growth strategy.


Operational Symptoms

  • Bloated Cost Base – High overhead costs, unnecessary expenses, or an overstaffed, inefficient workforce.

  • Underutilized Assets – The company has expensive infrastructure, software, or teams that aren't delivering proportional value.

  • Inefficient Processes – Internal bottlenecks, excessive bureaucracy, or legacy systems that hinder agility.

  • Lack of Product Innovation – The company is resting on past successes and has no compelling new products or services.

  • High Customer Churn – Customers are leaving faster than they’re being acquired, often due to declining service levels or outdated offerings.

  • IT Failures - Everyone wants to be in ‘Tech’, even the IT guys. Repeatedly this leads to the IT department underestimating the size of a challenge and jeopardising millions in investment.


Cultural & Leadership Indicators

  • Frequent Leadership Changes – A revolving door of CEOs, CFOs, or senior executives is a sign of instability.

  • Erosion of Trust – Employees and investors have lost confidence in leadership.

  • Defensive Leadership – Executives become overly protective, refusing to acknowledge problems or blaming external factors.

  • Fear-Based Culture – Employees are anxious about layoffs, reluctant to take risks, and disengaged.

  • Talent Drain – High performers are leaving for better opportunities, often replaced by less experienced hires.


Market & Competitive Position

  • Competitors Are Pulling Ahead – Rivals are innovating faster, taking market share, or undercutting on price.

  • Loss of Strategic Direction – No clear growth strategy, just reactive moves to stay afloat.

  • Desperate Partnerships – The company is forming alliances that seem like short-term cash grabs rather than strategic plays.

  • Regulatory or Legal Issues – Increased scrutiny, compliance problems, or lawsuits impacting operations.


Investor & Boardroom Signals

  • Activist Investors or Private Equity Interest – If PE firms or activists start circling, they see an opportunity to turn things around.

  • Panic from the Board – A suddenly more engaged board, especially pushing for major changes, is a sign of deep concern.

  • Dramatic Restructuring Plans – Unplanned pivots, major layoffs, or radical cost-cutting measures point to survival tactics.


Awareness is the First Line of Defence

No company is immune to entropy. No matter how dominant a business may seem, the forces of stagnation and decay are always at work, slowly eroding its agility, culture, and competitive edge. The difference between companies that endure and those that collapse is not intelligence, resources, or past success - it is the ability to recognise and resist entropy before it takes hold.

The uncomfortable truth is that decline rarely announces itself with flashing warning lights. Instead, it manifests in small, almost imperceptible shifts and, like the frog in slowly boiling water, no one takes bold action when the rate of decay is so gradual. Great companies recognise that the cost of maintaining the status quo is often far greater than the risk of transformation.

The question isn’t whether entropy will creep into your business—it will. The real challenge is whether you have the courage, clarity, and conviction to resist it before it’s too late.

In Part II, we’ll look at how companies can inoculate themselves from entropy and be constantly developing new upswings. We’ll then finish our series in Part III with a playbook for Turnaround Management and how CEOs can save a company in distress.


Further Reading

Bibeault, Donald: Corporate Turnaround: How Managers Turn Losers into Winners (1982)

Christensen, Clayton: The Innovator’s Dilemma (1997)

Christensen, Clayton & Raynor, Michael: The Innovator’s Solution (2003)

Collins, Jim: Good to Great: Why Some Companies Make the Leap... and Others Don’t (2001)

Hastings, Reed & Meyer, Erin: No Rules Rules: Netflix and the Culture of Reinvention (2020)

Lafley, A.G. & Martin, Roger: Playing to Win: How Strategy Really Works (2013)

Marquet, L. David: Turn the Ship Around!: A True Story of Turning Followers into Leaders (2012)

O’Callaghan, Shaun: Turnaround Leadership: Making Decisions, Rebuilding Trust and Delivering Results After a Crisis (2010)

Shein, James: Reversing the Slide: A Strategic Guide to Turnarounds and Corporate Renewal (2011)

Slatter, Stuart; Lovett, David & Barlow, Peter: Leading Corporate Turnaround: How Leaders Fix Troubled Companies (2011)


Only Human: A Primer on Bioethics


I started researching a more detailed piece on applied bioethics and in the process realised I needed to outline (for myself, let alone anyone else!) what Bioethics is in itself before I could meaningfully explore a real-world problem. So, to contextualise future debate in Medical Ethics, let’s situate ourselves:

  • Three modes of ethics (Meta-ethics, Normative, Applied)

  • Applied Ethics (Environmental, Business and Medicine)

  • Bioethics and the Life Sciences

  • Principlism as a common framework

    • Respect for Autonomy

    • Non-Maleficence

    • Beneficence

    • Distributive Justice

Metaphorical Minds: How we describe mental life

Written by Joss Duggan (Reading Time: 15 mins)

The Mind’s Changing Reflection

For thousands of years, we've looked at the tools we build and seen ourselves in them. Every era’s most advanced technology has shaped the way we imagine the human mind - an aqueduct, a clock, a telephone switchboard, a computer. We build machines to enhance our world, but in the process, we use them to describe our inner world as well.

This is more than just a linguistic trick. The metaphors we choose for the mind shape how we try to understand it. They dictate the questions we ask, the limits we impose, the very possibilities we consider.

But what if, in doing so, we are trapping ourselves in the assumptions of our own time?

If the best technology of today is how we explain the mind, will our current models - computation, AI, neural networks - eventually seem as quaint as describing emotions as flowing humours, or ideas as steam?


Ancient Metaphors: Water, Pneuma, and Mechanical Motion

Going with the flow…

Long before circuits and algorithms, ancient civilisations sought to explain thought through what they understood best: the movement of natural forces.

For the Greeks and Romans, the dominant metaphor was flow. The body was a system of humours, fluids coursing through channels like an aqueduct distributing water to a city. The mind was governed by the balance of these flows - an excess of bile or blood leading to moods, energy, or sluggishness. To manipulate thought or emotion was to control this movement, much like a physician or engineer adjusting the flow of a system.

“You can’t step into the same river twice”
— Heraclitus

Another early metaphor came from the Stoics, who described the mind as powered by pneuma, a kind of vital air or breath that animated the body. Thought was not something static but a force in motion, like wind through an instrument. This model influenced early medical and philosophical theories, shaping concepts of human vitality and agency.

Then came the mechanical metaphor, shaped by the growing sophistication of early engineering. Greek thinkers like Hero of Alexandria, who built early automatons, compared cognition to a catapult - a process of loading, aiming, and firing ideas into action. Thought, in this view, was a mechanical force: primed, released, and directed.

This metaphor expanded as early automata and clockwork devices became more refined, suggesting that the brain might operate under predictable physical laws, much like engineered structures. Ancient Chinese and Indian scholars also developed similar analogies, seeing the mind as an intricate mechanism that could be tuned or balanced to maintain order and function.

The Takeaway: The earliest metaphors tied cognition to the natural world - fluid motion, breath, and force - reflecting an intuitive understanding of the body as part of the environment.




The Clockwork Mind: Mechanisation and Dualism

Wind-up merchants…

The Renaissance and Enlightenment saw the rise of clockwork machines, some of the first human-made devices capable of precise, predictable motion. It was only natural that the metaphor shifted accordingly.

Descartes provided one of the most influential versions: the body as a mechanical system, controlled by a separate, rational mind - the famous “ghost in the machine,” as Gilbert Ryle would later dub it. He likened the nervous system to pipes and levers, the body responding like an automaton, with the soul as the unseen pilot. This dualistic perspective laid the groundwork for centuries of debate about the mind-body relationship.

Unsurprisingly, as clocks became more advanced, so did the metaphor. The brain was no longer just a machine - it was a self-regulating gear system, a complex but deterministic mechanism grinding away beneath the surface of conscious thought. The precision of mechanical timepieces inspired thinkers to conceptualise cognition as a series of interlocking parts, each functioning predictably and governed by logical principles.

The body is a machine, the mind its pilot
— René Descartes

This period of history also introduced mechanical automata, complex devices mimicking human movement and action. As engineers like Jacques de Vaucanson created lifelike machines that moved, played instruments, and mimicked speech, the idea that cognition could be reduced to mechanical interactions gained traction. Thinkers like Leibniz even suggested that if one could build a machine small enough, one might peer inside the workings of thought itself.

The Takeaway: The clockwork metaphor introduced the idea of deterministic cognition - a structured, rule-based system that ticked along predictably, setting the stage for later computational views.



The Steam-Powered Psyche: Thermodynamics and Mental Energy

With the Industrial Revolution came steam engines, which fundamentally changed how people viewed power, work, and motion. Just as machines needed fuel to produce energy, so too did the human mind seem to function under principles of energy conservation, release, and conversion.

Freud’s model of the psyche was deeply influenced by thermodynamic principles. He saw the mind as a pressure system, where unconscious drives built up like steam in a closed chamber. If not released through appropriate channels, these pressures could result in emotional distress or breakdowns. Neuroses were viewed as blockages in this system, requiring controlled release through therapy to restore equilibrium.

Beyond Freud, other psychologists adopted similar energetic metaphors. Mental effort was increasingly described in energetic terms - exhaustion as a depletion of cognitive resources, focus as an investment of energy, and emotional distress as a buildup of unresolved pressures.

By the late 19th century, Wilhelm Wundt and others explored reaction times and neural excitations in terms of energy conversion, seeing the brain as a metabolic system where cognitive exertion followed physical principles.

The Takeaway: The industrial age introduced the idea of the mind as an energy system, where thoughts and emotions operated under laws of thermodynamics.

The Mind as a Telegraph and Telephone Switchboard

Switching it up…

The rise of telephone switchboards in the late 19th and early 20th centuries revolutionized communication, and with it came a new metaphor for the mind. Just as operators connected calls by plugging and unplugging wires, early neuroscientists imagined the brain as a vast switchboard, relaying signals between different regions to produce thought, memory, and action.

One of the most significant proponents of this idea was Charles Sherrington, who described the nervous system as an “integrative action” of circuits, with neurons acting as electrical relays. His work laid the foundation for the modern understanding of synaptic transmission, in which thoughts are transferred like calls bouncing from switchboard to switchboard across the brain’s vast network.

Man is but a network of relationships and circuits
— Norbert Wiener

This analogy also paralleled the development of cybernetics in the mid-20th century. Thinkers like Norbert Wiener extended the metaphor, likening the human mind to an information-processing system—capable of adjusting, rerouting, and optimizing signals based on feedback, much like an advanced switchboard operator directing an overwhelming influx of calls.

Yet, the metaphor had its limitations. Unlike a telephone exchange, which relies on deliberate human input, the brain is not merely a passive relay station—it adapts, learns, and sometimes misroutes information in ways that a rigid switchboard could never do. Nonetheless, this concept paved the way for later computational theories of mind and early AI research, reinforcing the idea that intelligence could be mechanized and optimized through structured connections.

The Takeaway: The rise of electrical networks reinforced the idea of thought as structured signal transmission, a precursor to modern neuroscience and AI models.



The Mind as a Computer: Symbolic Processing and AI

Have you tried turning it off, and on again?

As the 20th century progressed, computers emerged as the dominant technology, and with them came a radical shift in how we conceived of the mind. No longer a system of mechanical parts or flowing energy, cognition was now understood as symbolic processing - a set of rules for manipulating information, akin to the logic gates of a machine.

The Turing Machine, conceptualized by Alan Turing in the 1930s, provided a foundation for this metaphor. Turing proposed that computation could be broken down into discrete, programmable steps, leading to the idea that thought itself could be reduced to a series of logical operations. This view fueled the rise of cognitive science, where researchers likened memory to digital storage, reasoning to computational algorithms, and perception to data input.

A man provided with paper, pencil, and rubber, and subject to strict discipline, is in effect a universal machine
— Alan Turing

This model became dominant in the mid-20th century, particularly with the advent of artificial intelligence (AI). Early AI pioneers, such as John McCarthy and Marvin Minsky, pursued the dream of creating machines that could think like humans by encoding knowledge as a series of rules and logical structures. The mind as a computer became the standard metaphor in psychology, neuroscience, and philosophy of mind.

Yet, as AI developed, it became clear that human cognition was not simply a matter of logical processing. The failures of early AI to replicate human creativity, intuition, and learning forced a re-evaluation of the computational metaphor.

The Takeaway: The computer metaphor dominated the 20th century, but it struggled to fully explain human cognition, leading researchers to explore more dynamic and adaptive models.




The Neural Network Model: The Mind as an Adaptive System

The brain’s original LAN party

The metaphor of the brain as a computer began to falter as neuroscientists realized that cognition was not a rigid, rule-based process but an adaptive, dynamic system. Enter neural networks, a conceptual shift that mirrored how the brain actually functions—not as a series of predefined circuits, but as a constantly evolving system of connections that strengthen or weaken over time.

Unlike traditional computational models, which rely on explicitly programmed logic, neural networks operate through learning and experience. Inspired by the biological structure of neurons, artificial neural networks mimic how synapses adjust their strength based on repeated patterns of activity, leading to the development of deep learning and artificial intelligence that can recognize patterns, make predictions, and even generate new information.
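The experience-driven weight adjustment described above can be sketched in a few lines. This is a toy single-neuron example using the classic perceptron learning rule - the data, learning rate, and epoch count are illustrative choices for this sketch, not a model of the brain:

```python
# A single artificial neuron learning from examples: connection weights
# strengthen or weaken in proportion to error, rather than being
# explicitly programmed. Hypothetical toy example.

def train_neuron(samples, epochs=20, lr=0.1):
    """Learn weights for one neuron from (inputs, target) pairs."""
    w = [0.0, 0.0]  # one weight per input "synapse"
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - output
            # Repeated exposure nudges each connection up or down.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# The neuron learns the logical-OR pattern purely from examples:
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_neuron(data)
```

No rule for OR is ever written down; the behaviour emerges from the adjusted weights - the conceptual shift the neural-network metaphor captures.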

The brain is the most complex thing we have yet discovered in our universe
— Michio Kaku

This shift reflected a deeper understanding of neuroplasticity, the brain’s ability to rewire itself in response to injury, learning, and environmental changes. Rather than functioning like a static, mechanical system, the brain reconfigures itself over time, making intelligence more about adaptation than computation.

However, while neural networks have revolutionised AI, they have also highlighted limitations in our understanding of consciousness. AI systems can process vast amounts of data and optimise performance through feedback, but do they understand? Unlike the human brain, which integrates sensory experience, emotion, and abstract reasoning, artificial neural networks lack an intrinsic sense of meaning—they recognise patterns but do not perceive them.

Despite this, the neural network metaphor has been instrumental in reshaping modern neuroscience, leading to breakthroughs in deep learning, cognitive psychology, and even theories of memory formation and decision-making.

The Takeaway: The neural network model moved beyond the rigid computational metaphor, suggesting that cognition is fundamentally about adaptation and experience-driven learning.


The Quantum Mind: A New Frontier?

While neural networks offer a more biologically plausible model of cognition, they still operate within classical physics. However, recent explorations into quantum mechanics have led some researchers to propose that consciousness itself may be a quantum phenomenon.

The Orchestrated Objective Reduction (Orch-OR) theory, developed by physicist Roger Penrose and anesthesiologist Stuart Hameroff, suggests that consciousness arises from quantum processes within microtubules in neurons. Unlike classical computing, where bits exist in states of 0 or 1, quantum computing allows for superpositions, where multiple possibilities can exist simultaneously.

If cognition operates at a quantum level, it could explain aspects of human thought that remain mysterious—such as intuition, creativity, and even free will. While still controversial, the quantum mind hypothesis forces us to ask: Have all previous metaphors been insufficient because they rely on classical mechanics, while the mind itself may operate in a fundamentally different way?

The Takeaway: If consciousness has quantum underpinnings, then all previous mechanical metaphors may be inadequate, requiring a radical new framework for understanding the mind.




Conclusion: Beyond Metaphors?

From flowing humours to clockwork gears, from telegraph circuits to neural networks, our attempts to understand consciousness have always been shaped by the technology of the time. Each metaphor has provided valuable insights but has also imposed limitations, framing cognition within the boundaries of human invention.

But what if consciousness isn’t like any of these things? What if the mind is not a machine, not an energy system, not a computer, but something entirely different - something we have yet to fully conceptualise?

As science progresses, new paradigms will emerge, bringing new metaphors with them. Perhaps future discoveries will force us to abandon technological analogies altogether, replacing them with something closer to reality. Until then, we will continue to peer into the mind’s depths, using the tools at our disposal to try and capture something that may, ultimately, be beyond metaphor itself.

The Final Takeaway: Our metaphors for the mind have evolved with our technology, but the ultimate nature of consciousness may require an understanding beyond machines, networks, or computations - something we have yet to even imagine.


To Sum Up

  • Rivers and Aqueducts – Thought flows like water through channels, requiring structure but always moving (Heraclitus)

  • Ballistics and Catapults – Ideas are launched like projectiles, calculated but influenced by external forces (Ancient Ballistics)

  • Clockwork Mechanisms – The brain is a system of gears and levers, ticking predictably (Descartes)

  • Steam and Pressure Systems – The psyche builds up mental energy like a closed system that must be released (Freud)

  • Electrical Circuits and Switchboards – Thought operates through pathways of connection, relaying signals like a vast network (Sherrington)

  • Computers and Algorithms – Computation explains intelligence (Putnam & Fodor)

  • Neural Networks – The mind isn’t programmed; it learns, rewires, and adapts.

  • Quantum Computing – If the brain operates at a quantum level, all previous metaphors might be obsolete (Roger Penrose)

  • Final Thought – Every era’s technology reshapes our view of the mind. But what if the mind is not like any of our inventions? We may need an entirely new paradigm - or to accept that some things simply defy metaphor.

Further Reading

  • Descartes, René: Discourse on Method (1637)

  • Freud, Sigmund: The Interpretation of Dreams (1899)

  • Hameroff, Stuart & Penrose, Roger: Consciousness in the Universe: A Review of the 'Orch OR' Theory (2014)

  • Hebb, Donald O.: The Organization of Behavior: A Neuropsychological Theory (1949)

  • Kurzweil, Ray: How to Create a Mind: The Secret of Human Thought Revealed (2012)

  • McCulloch, Warren & Pitts, Walter: A Logical Calculus of the Ideas Immanent in Nervous Activity (1943)

  • Searle, John: The Rediscovery of the Mind (1992)

  • Sherrington, Charles: Man on His Nature (1940)

  • Turing, Alan: Computing Machinery and Intelligence (1950)

  • Wiener, Norbert: Cybernetics: Or Control and Communication in the Animal and the Machine (1948)

Balancing Interests: The Agency Theory of Management

A FIRST ORDER PROBLEM

There are some issues that are so fundamental and deep-rooted that I refer to them as "first-order problems" - the few root causes that are in turn responsible for so many of the issues we face today as a society.

If we ask ourselves what fundamental underlying principle we run our companies on - the idea so widespread and implicit that we never actually think about it - that idea would be the agency theory of management, otherwise known as shareholder primacy.

Over the course of the last 40 years, this idea has become so deeply embedded in modern corporate governance that it never even gets questioned. In the interest of bringing it out of the shadows and into the light, we're going to discuss:

  • Where did this idea come from in the first place?

  • How did it become so deeply embedded in how we do business?

  • What are the unintended consequences that society has been asked to shoulder?

  • What are the ethical frameworks that help us situate these ideas?

  • What are the alternative ways of running companies to balance social interests?

THE SEED OF AN IDEA

Back in the 1970s, two eminent academics arrived, separately and independently, at the realisation that there is a fundamental disconnect between those who "own" companies and those who "operate" them.

Whether companies are public or private, the "principal" is the party who legally owns the company, and must live with the positive or negative consequences of business outcomes; it could be the private owner(s) of a company, or the collective shareholders in a publicly listed company.

The 'principal' delegates authority for everyday decision-making to an 'agent', who in this case would be the organisation's management team - responsible for developing and executing strategy on behalf of the owners. At its most reductive, the role of the agent is to maximise the value created for the Principal.

The tricky part comes when the incentives of these two groups are not well-aligned. Whilst agents may have a "fiduciary responsibility" to the owners, the sometimes problematic interplay between these two parties has become known as the 'principal-agent problem'.

The theory was codified in a groundbreaking 1976 paper by Michael C. Jensen and William H. Meckling, who described the ways in which agency problems arise and how they might be mitigated through monitoring, incentives, and bonding mechanisms. The idea gained traction quickly, transforming corporate governance with a clear message: managers must be watched and incentivised to ensure they act in the owners’ interests.


THE SEED DISSEMINATES

By the late 1970s, business schools around the world (and especially in the United States) had adopted agency theory as part of their core curricula. It became a fundamental lesson in MBA programs, embedded in courses on corporate finance, management, and organisational behaviour. The theory’s appeal was in no small part due to its simplicity—it offered a tidy framework for understanding the separation between ownership and control, without having to deal with the complexities of weighing-up potentially conflicting interests between multiple stakeholders.

Shareholders reigned supreme; all other interests came a distant second...

As graduates from these programs moved into senior positions in industry, and the theory caught the imagination of consultants at McKinsey, BCG, Bain, et al., these principles took hold in the boardrooms of companies across all industries. Corporate governance structures were reshaped, often with an intense focus on aligning the interests of managers with those of shareholders; performance-based pay, stock options, and executive bonuses became the new norms, designed to tether managerial ambitions to company performance.

A purely economic problem, with a simple and elegant economic solution. Align all incentives to the principal.

UNINTENDED CONSEQUENCES

By the 1990s, agency theory had become ubiquitous and went unchallenged; but like any idea taken to an extreme, its flaws started to surface through the unintended consequences it created:

  1. Short-Termism and Tunnel Vision

Whether intentional or not, the emphasis on maximising value for shareholders has driven short-term thinking and a myopic focus on share price.

Executives under pressure to meet quarterly earnings targets often prioritise actions that boost immediate results (i.e., cost-cutting, share buybacks, or even strategic mergers that look good on paper but might not add long-term value) rather than decisions that may not pay dividends (literally) for years to come, but are nonetheless in the long-term interest of the business.
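This divergence between what an owner values and what a bonus rewards can be sketched in a toy model. Everything below is invented for illustration: the discount rate, the cashflows and the "year-one earnings" bonus rule are assumptions, not data from any real company.

```python
# Toy illustration of the principal-agent misalignment: an executive paid
# only on this year's earnings prefers the project that looks best
# short-term, even when its long-term value is lower.

def npv(cashflows, rate=0.08):
    """Net present value of annual cashflows, first entry occurring now."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

cost_cutting   = [50, 10,  5,  0]   # flatters this year's numbers
rnd_investment = [-20, 10, 40, 60]  # upfront cost, bigger payoff later

# The owner (principal) values the whole discounted cashflow stream...
owner_prefers = max([cost_cutting, rnd_investment], key=npv)

# ...but an agent bonused on year-one earnings alone sees only the first number.
agent_prefers = max([cost_cutting, rnd_investment], key=lambda cf: cf[0])

print(owner_prefers is rnd_investment)  # True: R&D has the higher NPV
print(agent_prefers is cost_cutting)    # True: cost-cutting wins the bonus
```

Both parties are looking at the same two projects; the only difference is the horizon their incentives reward.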

This approach has all kinds of negative consequences, not least of which is the dramatically increased risk of disruption from new market entrants. Companies that fail to invest in, and place bets on, long-term innovation are doomed to be overtaken.


  2. Excessive Executive Compensation

According to the Economic Policy Institute, CEO compensation rose 1,460% (adjusted for inflation) between 1978 and 2021, while the average worker's pay increased by just 18% over the same period. This stark contrast highlights a growing imbalance that has deepened economic inequality more than anyone anticipated. While it's true that top executives have significant leverage to create value, the widening gap between their compensation and that of their employees raises important questions about fairness and sustainability.

In many publicly-listed companies, executive pay is directly linked to share price performance. On the surface, this seems reasonable; after all, it aligns the interests of CEOs with those of shareholders. However, this approach oversimplifies what should be a nuanced set of incentives. It often drives executives to prioritise short-term stock price gains, even when those decisions come at the expense of long-term stability, growth, or the well-being of employees.

This pay structure has contributed to a significant shift in corporate culture, reinforcing a system where executive enrichment and shareholder returns come first, while the contributions and well-being of other stakeholders often take a back seat. The result is not only a wider economic divide but a diminishing sense of shared purpose within organisations. Addressing this imbalance is essential for building a more inclusive and sustainable corporate landscape, one that values all contributors and measures success by more than just the numbers on a stock ticker.

  3. Increased Risk-Taking

When incentive structures become extreme, they can drive excessive risk-taking as executives strive to meet ambitious performance targets. These structures, often tied heavily to short-term stock performance, can create a mindset where immediate gains outweigh sustainable growth or long-term health. This focus can lead executives to prioritise high-stakes strategies that may boost short-term metrics but introduce significant vulnerabilities into the business model.

We saw the consequences of this behaviour play out dramatically during the 2008 financial crisis. Financial institutions pursued risky products and leveraged complex financial instruments to inflate profits and meet targets, driven by a culture where executive compensation was closely tied to immediate performance. In this environment, long-term risks were overlooked, and the ripple effects of these choices were underestimated or ignored. The relentless chase for short-term rewards ultimately contributed to widespread economic destabilisation.

  4. Neglect of Broader Stakeholders

One of the most significant issues with agency theory is its focus on shareholder primacy, often sidelining other vital stakeholders. When companies concentrate solely on boosting short-term profits, employees are frequently affected through cost-cutting measures like layoffs and reduced benefits. While these strategies may temporarily inflate earnings, they erode trust, loyalty, and innovation—essential elements for long-term growth. A disengaged workforce isn’t just a number on a balance sheet; it’s a missed opportunity for creativity and resilience.

The environmental impact is another overlooked consequence. Companies driven by profit-first thinking might meet the bare minimum of environmental regulations but skip genuine, impactful sustainability efforts. This approach leads to practices that can harm ecosystems and contribute to climate change—problems society at large must then address. Similarly, community interests can suffer when businesses make decisions that, while profitable in the short term, lead to job losses or reduced economic stability in their local areas.

This narrow approach risks alienating employees, communities, and even loyal customers, chipping away at the trust that sustains a business. The antidote lies in embracing frameworks like stakeholder theory and Confucian ethics, which encourage leaders to value people and the planet alongside profit. Companies that weave these principles into their practices don’t just create shareholder value—they build robust, lasting legacies that thrive on integrity and care for all.

ADDING NUANCE TO THE CONVERSATION

If agency theory has taught us anything, it’s that simplicity can be seductive—but dangerous when taken too far. The laser focus on shareholder primacy has given us short-term wins and, let’s be honest, plenty of headaches. So, where do we go from here? Thankfully, we're spoilt for choice with a host of thinkers who offer richer, more balanced ways to lead. Enter Confucian ethics, Rawls’ theory of justice, social contract theory, and stakeholder theory—ideas that nudge us to zoom out and see the bigger picture.

Confucian Ethics calls on leaders to think beyond their own bottom line and lead with virtues like benevolence and integrity. Imagine a corporate world where decisions aren't just measured in quarterly gains but in trust built and relationships nurtured. That’s not just feel-good fluff; it's long-term strategy. Companies rooted in these values make choices that sustain growth without sacrificing the well-being of employees, customers, or the broader community. A leader with a Confucian mindset understands that sustainable success is about creating a harmonious whole, not just scoring wins at the expense of others.

Then we have Rawls’ Theory of Justice, which shines a spotlight on fairness. Rawls argued that inequalities are only justifiable if they lift up those who are worst off. Now, look at today’s executive pay scales—it's hard not to see a disconnect. In a Rawlsian world, we'd be talking about capping excessive executive compensation and crafting pay structures that don’t just benefit a handful at the top but spread success more evenly. The ripple effect? A more balanced workplace that feels less like a gladiatorial arena and more like a community working toward shared goals.

Social Contract Theory, that age-old reminder that businesses don't exist in a vacuum, feels more relevant than ever. Rousseau would tell us that there's an unspoken agreement between businesses and the society that sustains them. Companies that prioritise short-term profit at the expense of their communities are breaking that contract. Picture a world where businesses invest in fair labour practices, meaningful sustainability efforts, and community support, not just to check a box, but because they see themselves as partners in society's long game.

And let’s not forget Stakeholder Theory, brought into the spotlight by R. Edward Freeman. This isn’t just a theory; it’s a call to action. It suggests companies should aim to create value for all stakeholders—employees, customers, suppliers, even the environment—not just shareholders. What would this look like in practice? It’s more transparent reporting, boardrooms that look like the world they serve, and decisions that weigh social and environmental impacts as heavily as financial ones. It’s about asking not just “How will this affect our stock price?” but “How will this affect everyone who has a stake in our future?”

We’ve clung to agency theory as if it were the only story worth telling. But the world is shifting, and our playbook should too. Drawing on these rich perspectives doesn’t just add color to the grey areas; it changes the entire landscape. Because when we lead with empathy, fairness, and a genuine respect for all stakeholders, we don’t just create profitable companies—we build legacies worth celebrating.

So what can we do differently?

The good news is that there are plenty of companies actively working to rebalance their incentive structures so that management takes a broader, more holistic view of success.

1. Redesign Incentive Structures for the Long Haul

Rethink compensation packages to go beyond quick wins. For instance, instead of only tying bonuses to quarterly earnings, introduce metrics that reward sustained progress, such as customer retention, employee engagement scores, and reduction in carbon emissions. Companies like Unilever have done this by linking executive pay to their Sustainable Living Plan targets, which include environmental and social benchmarks. This encourages leaders to prioritise actions that create long-term value, not just immediate profit.
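To make the idea concrete, here is a minimal sketch of how such a blended scorecard might be computed. The weights, metric names and targets below are entirely hypothetical (this is not Unilever's actual plan or any real scheme); the point is simply that long-term metrics receive explicit, non-zero weight alongside quarterly earnings.

```python
# Hypothetical blended-scorecard bonus: each metric is scored as
# actual/target (capped), weighted, and the weighted sum scales the bonus.

def blended_bonus(base_salary, metrics, weights, max_multiplier=1.5):
    """Return a bonus that rewards both short- and long-term performance."""
    score = 0.0
    for name, weight in weights.items():
        actual, target = metrics[name]
        attainment = min(actual / target, max_multiplier)  # cap outperformance
        score += weight * attainment
    return round(base_salary * score, 2)

weights = {
    "quarterly_earnings": 0.40,   # short-term financial performance
    "customer_retention": 0.25,   # long-term franchise health
    "employee_engagement": 0.20,  # workforce well-being
    "emissions_reduction": 0.15,  # environmental benchmark
}

metrics = {  # (actual, target) pairs for the period, all invented
    "quarterly_earnings": (105, 100),
    "customer_retention": (88, 90),
    "employee_engagement": (72, 80),
    "emissions_reduction": (12, 10),
}

print(blended_bonus(100_000, metrics, weights))  # roughly 1.02x base salary
```

Note the design choice: missing the engagement or retention target drags the payout down even in a quarter where earnings beat plan, which is exactly the counterweight to short-termism the section describes.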

2. Redefine Success to Look Beyond the Balance Sheet

Expand what it means to be successful by incorporating measurable non-financial outcomes. For example, Patagonia consistently reports on its impact on local communities and environmental projects, showcasing its commitment to the planet. Companies can start small by including employee satisfaction surveys, community impact metrics, or sustainability reports in their annual disclosures. These tangible measures build trust and prove that success is about more than numbers on a balance sheet.

3. Cultivate Ethical Leadership That Walks the Talk

To instill ethical leadership, build programs that go beyond theory and focus on real-world application. For instance, launch a mentorship initiative where senior leaders coach managers on how to make decisions that balance profit with integrity. Danone has introduced a leadership framework that includes training on social responsibility, ensuring that leaders are equipped to make empathetic, fair, and community-minded choices. This tangible approach reinforces that leadership is about more than just results—it’s about leading with purpose.

4. Champion Transparency Like It’s a Competitive Advantage

Adopt transparency as a core value, not an afterthought. For example, Salesforce provides clear disclosures on its diversity statistics and progress toward equality goals, which has strengthened stakeholder trust. Companies could also follow the lead of Bank of America, which includes detailed executive compensation reports and rationales in their annual proxy statements. These actions demystify decision-making processes and help bridge the trust gap between executives and other stakeholders.

Ultimately though, board culture is the greatest determinant of whether these issues get onto the agenda. A collaborative and diverse board leads to higher-quality conversations that bring in critical perspectives that are often lacking. The companies that will win in the long term are those infusing this thinking into the DNA of the organisation, so that, consciously or not, executives are building for the long-term benefit of all stakeholders.

To Sum Up (the TL;DR)

  • Agency theory's focus on shareholder primacy has driven short-term gains but has contributed to deep-rooted problems like economic inequality, short-termism, and the neglect of broader stakeholder groups.

  • The theory gained widespread influence through business schools and corporate consultants in the 1970s and 1980s, embedding it deeply in modern corporate governance.

  • This narrow focus has fuelled excessive executive compensation, increased risk-taking, and eroded trust within organisations, with significant societal and environmental impacts.

  • We can draw on Confucian ethics, Rawls' theory of justice, social contract theory, and stakeholder theory to develop a more nuanced and balanced approach that delivers sustainable results

  • Boards should consider incentive structures, greater employee participation in wealth creation, redefining success to include social outcomes, fostering ethical leadership, championing transparency, and ensuring board diversity for long-term stakeholder value.

Further Reading

Economic Policy Institute (2022) CEO pay has skyrocketed 1,460% since 1978: CEOs were paid 399 times as much as a typical worker in 2021. Available at: https://www.epi.org/publication/ceo-pay-in-2021/ [Accessed 8 November 2024].

Fleming, P. and Jones, M. T. (2013) The End of Corporate Social Responsibility: Crisis and Critique. London: SAGE Publications.

Freeman, R. E. (2010) Strategic Management: A Stakeholder Approach. Cambridge: Cambridge University Press.

Henderson, R. (2020) Reimagining Capitalism in a World on Fire. New York: PublicAffairs.

Jackall, R. (2010) Moral Mazes: The World of Corporate Managers. New York: Oxford University Press.

Low, K. C. P. (2013) Confucianism and Modern Management. Singapore: Springer.

Ross, S. A. (1973) Origin of the Theory of Agency: An Account By One of the Theory's Originators. Available at: https://www.researchgate.net/publication/228124397_Origin_of_the_Theory_of_Agency_An_Account_By_One_of_the_Theory's_Originators [Accessed 8 November 2024].

Rawls, J. (2001) Justice as Fairness: A Restatement. Cambridge, MA: Harvard University Press.

Stout, L. A. (2012) The Shareholder Value Myth: How Putting Shareholders First Harms Investors, Corporations, and the Public. San Francisco: Berrett-Koehler Publishers.

Wheatley, M. J. (2006) Leadership and the New Science: Discovering Order in a Chaotic World. 3rd edn. San Francisco: Berrett-Koehler Publishers.

The Philosopher's Seat (at the table)

Alright! You’ve got me. Plato never actually set foot inside a boardroom. He never read a board-pack, chaired an audit committee, or dialled into a conference call. But his influence on what happens in the modern boardroom is deepening over time and becoming ever clearer.

Plato is rightly considered the father of Western philosophy. Whilst several thinkers came before him, Plato laid the foundations of Western philosophical thought and essentially founded political philosophy when he wrote The Republic. In fact, some have even gone so far as to say:

The safest general characterisation of the European philosophical tradition is that it consists of a series of footnotes to Plato
— Alfred North Whitehead

Call me dramatic, but I truly believe that we’re making decisions now that will affect how we live for the next 1000 years. If we take a step back and think about where we stand as a species, I’d argue that humanity is at a transformative moment; defined by the dual forces of extraordinary progress and profound challenges.

We’ve reached an era where AI can revolutionise industries and connect billions, yet it brings the looming spectre of mass unemployment. Climate change threatens our very existence, while the liberal democratic progress we believed to be inevitable has proven otherwise under the strain of deep polarisation and division. Capitalism, once the undisputed engine of economic progress, now stands at a crossroads, criticised for prioritising short-term profit over long-term sustainability.

Set against this backdrop, business leaders, now more than ever, need more than commercial acumen; they need a strong moral compass and philosophical underpinnings to navigate the complexities ahead.

An Ethical Blindspot in Business Strategy

Philosophical objectivism may be closer than it appears…

While businesses may have mastered the art of making money, this often comes with unseen costs, as the relentless pursuit of growth often sidelines ethical considerations and social responsibility.

As ever, new technologies have brought significant advancements and allowed companies to drive efficiencies; but as AI and robotics become more sophisticated, they also raise important questions:

  1. What do we do when unchecked growth exacerbates inequality?

  2. What happens when these efficiencies lead to job losses on a massive scale?

  3. Who is responsible for answering these questions on behalf of society?

In the corporate world, focusing solely on quarterly earnings and shareholder value risks amplifying these challenges. Corporate Social Responsibility (CSR) appeared for a while as a bolt-on; repackaged and rebranded as "ESG", it most often sits within the marketing department and has no real teeth to effect change.

Leaders who ignore the long-term impacts of their actions may find themselves contributing to societal instability that even the most robust profits cannot buffer against.

This is where a new mindset is needed - one that integrates sustainability and ethical principles as core business practices rather than afterthoughts.

Guiding Modern Leaders

Boards play a critical role in guiding leadership

It hasn’t always been this way, but the prevailing wisdom since the 1970s has been the "agency theory of management", which is a fancy way of saying "maximise profit for shareholders to the detriment of all others". But business leaders cannot just be profit-maximisers.

The role of a modern leader should be to look beyond the immediate gains and embrace a broader view that considers the well-being of all stakeholders—employees, communities, and the environment. Success needs to be redefined to include the positive impacts a company makes on society and its resilience in facing external challenges.

Businesses that embed accountability and social responsibility within their operations create not just stronger companies but stronger societies. They position themselves to weather economic fluctuations and societal shifts with greater adaptability. The evolution from “business as usual” to ethical stewardship is not just an ideal; it’s fast becoming a competitive advantage.

Consciously Moving Beyond the Profit Motive

The traditional model that prioritises profit at any cost has reached its limits. Both executive directors and their boards must step into the role of ethical stewards, setting the tone for responsible corporate behaviour that prioritises long-term value over short-term gain. This approach doesn't mean abandoning profitability; it means expanding the definition of success to align with values that benefit more than just shareholders.

We can already see this shift in forward-thinking companies. B-Corps, for instance, strive to balance purpose and profit, making decisions that consider their broader impact. Some tech companies are investing heavily in training and upskilling their workforce to prepare for the shifts that AI and automation will bring. These examples show that responsible growth is not only possible but essential.

Ethical considerations can no longer be optional in the boardroom; they must be integral to decision-making. Leaders who adopt a holistic approach, incorporating social, environmental, and economic factors, are better equipped to navigate complex global challenges. The essence of leadership today lies in asking deeper questions and guiding companies to answers that benefit everyone.

The Mission Statement

“I had lost the ability to bullshit…” - Jerry Maguire

When I studied Philosophy at university, I really had no idea how useful it would be in life.

Yet with every year that passes, I realise more and more how relevant and interconnected it is with the world around us, particularly in business. It addresses the most important issues humanity faces today: bioethics, artificial intelligence, climate change, not to mention the nature of capitalist economics.

It’s been really heartening to see that, over the last five years in particular, there has been a renewed interest in philosophy, with writers like Alain de Botton, Mark Manson and Ryan Holiday bringing ancient wisdom to the question of how we can lead better lives.

What I think is missing though, is the application of these ideas to modern business, and the wide-ranging societal implications that are going to shape how we live for millennia to come.

Plato in the Boardroom is my attempt to explore the intersection of philosophy and business (for my own edification, if nothing else) by looking at the moral dilemmas posed by a rapidly changing world.

So how do we collectively step-up and lead ourselves through these tectonic shifts?

Some of the best answers, I strongly suspect, are to be found in ancient, Enlightenment, and modern thinking, which can provide clarity and direction. As we embark on this path, I invite you to reflect: what kind of future are we building, and is it one we'd all want to live in?

Thanks for coming on the journey with me!

Thought Experiments: Plato's Cave

Written by Joss Duggan (Reading Time: 7 mins)

For obvious reasons, I had to start our series on Thought Experiments with the great man himself and explore Plato's allegory of the cave. It appears in book seven of The Republic, wherein Socrates, Plato's mentor, discusses the nature of reality with Plato's older brother, Glaucon.

This particular part of Plato's writings has been discussed and debated for centuries and is still the starting point for many an undergraduate course in Philosophy because it's such a great jumping-off point into the wider canon of philosophical literature. It also now raises new questions when we link it with modern ideas about honesty and transparency in the workplace, and our own skepticism and rejection of the unknown. But before we get into all of that, here's the story...

The Cave

Imagine yourself in a deep, dark cave, far beneath the cold ground. Water drips down off the ceiling where prisoners sit, chained down, unable to stand, facing a tall grey wall. They have lived this way their entire lives, never having seen the outside world, much less the sun. A fire burns on the other side of the cave, and between the fire and the prisoners, a stone path winds its way from the depths of the cavern all the way up to the surface.

Every day, people come walking through, travelling to and fro, carrying items from the surface into the cave and back; but the prisoners, with their faces to the wall and unable to move or turn around, perceive only the outlines of these figures, their shadows cast starkly against the grey wall by the light emanating from the fire on the other side of the cave.


The shackled prisoners, having never properly seen their surroundings, believe these 'shadows' to be the 'real' world as the sounds and voices within the cave echo off the walls and appear to come from the shadows themselves.

One day, one of the prisoners has his shackles loosened so that he can turn around and see his surroundings. Blinded by the light emanating from the fire, he instinctively turns away, refusing to look around and see his situation for what it is. He turns back towards the wall, eyes still burning, but taking comfort in the knowledge that with his sight returning to him, he can go back to his safe, normal existence.

Suddenly, and before he can gather himself, a guard pulls him away from the wall where he sits and begins dragging him up the long winding path towards the surface. The prisoner screams and cries out to be left alone, but the guard pays no attention, and the long and arduous journey towards the bright light of the surface continues, the prisoner kicking and screaming in fear as he goes.

As he edges closer and closer to the surface, his eyes slowly adjust, until he finds himself at the cave's mouth and is thrust into the real world. He slowly stands, taking in the landscape: the green trees swaying in the wind, the blue cloudless sky and the hot yellow sun beating down upon him. The prisoner, now freed from the cave, starts to understand the nature of this 'true' reality, and that he has been living all his life in an illusion, one still shared by his fellow inmates down in the cave.


The world outside is as beautiful as it is grotesque, but having now seen the real world, he realises how naive he had been and desperately wants to free his fellow inmates. So he rushes back down into a cave that seems darker than he first remembered, because his eyes are now accustomed to the light of the world outside. Though he worries he may never see the light again, he courageously pushes on, further down into the darkness below.

By the time he reaches his old home, he can barely see at all; the darkness is so acute that even the fire in the corner doesn't give enough light for him to see well, and he can no longer make out the shadows on the walls as he did before. He hurries over to his friends and begins to tell them about his journey to the surface. At first, his former compatriots dismiss his ideas as outlandish and wild; they say he's lost his mind, that leaving the wall has turned him insane. Convinced he must 'show' them the light by force, the now-enlightened former prisoner tries to drag the other inmates out into the light, to show them the path. But they quickly turn on him, even becoming violent. They don't want to see the light; they aren't ready for it, and perhaps they never will be. For now, they're happy to sit in the cave, shackled to the ground, watching the shadows dance across their dark, grey wall.

Interpretation - So what's it all about?


The allegory of the cave describes what Plato believes it means to be a philosopher or 'enlightened' human being. He describes the initial refusal of the truth, the blinding pain of seeing it for the first time, and the difficult path we must walk to learn and grow as human beings. Eventually, once we have expanded our consciousness with new knowledge, it's impossible to go back and live inside the cave, and we naturally want to bring others into that new understanding too.

If this idea seems a little familiar to you, then it may be because it has been the basis of many blockbuster science-fiction movies such as 'The Matrix', 'Dark City' and 'The Truman Show'. In fact, all good science fiction has deeply philosophical themes at its core: the nature of reality, what it means to be human, what consciousness is, and so on - which will be the topic of a future post, but I digress...


But we have to exercise some humility here - we so often feel SO certain that we know better, that the way we see the world is the right one. It raises the question: are we the prisoner who already made it to the surface, or are we the prisoner who's still trapped in the cave? Perhaps we only escape one cave to find ourselves in another, trapped by a new set of ideas and understandings that still doesn't represent the 'true' nature of reality (if there even is such a thing).

We see something similar every day with the people we live and work with - we talk a lot about how important honesty is in both our personal and professional lives, but the truth isn't just uncomfortable; it can be excruciating and unbearable. Do we really want the truth? We do if we want to become better humans.


This is the insight behind the 'Radical Candor' movement. Assuming for the moment (and it's a massive assumption) that our colleagues' feedback about us is accurate (and therefore synonymous with an objective 'truth'), then receiving it as often as we can is like rocket-fuel for our growth. If we can withstand enough of the truth and live outside our comfort zones for long enough (where the truth often lives and growth always happens) then we'll grow stronger and faster than those who choose not to. This is one of the core ideas of 'Growth Mindset', first conceived of by Carol Dweck at Stanford University.

Some people aren't ready for that kind of truth; they need to move to the surface more slowly, and that's absolutely fine - everyone is walking their own path, at their own pace. But as with learning a new language, between you and fluency lie a thousand embarrassing and painful mistakes which, if you don't find a way to cope with and normalise them, will drive you back down into your cave.

(Parenthesis: For an incredible guide to getting your children to better embrace their mistakes, take a look at 'The Straight-A Conspiracy' by education experts Hunter Maats and Katie Locke O'Brien.)


It takes a huge amount of courage to turn and face an unpleasant truth. There's an important idea from psychology called 'cognitive dissonance', which we'll explore in another post: when we hold conflicting ideas, we tend to find a way to rationalise them, or to ignore the more painful of the two. Plato knew how hard people will fight to maintain their world-view; when Socrates was put to death in ancient Athens, it was precisely because his teachings conflicted with accepted wisdom, and the ruling elite of the day decided it was easier to execute him than to deal with their own internal dissonance.

Plato's cave reminds us that we all live in the dark - there are always new things to discover and things that we are completely unaware of. The question is, as we live our lives, when the light beckons us forward, do we fearfully retreat? Or do we instead step bravely away from our own shadow-walls and struggle up towards the surface, where the promise of personal growth awaits?

To Sum Up

  • Plato describes the journey of a prisoner who finally learns the true nature of reality after having been trapped inside a cave all his life, believing that shadows were the real world
  • The prisoner tries to show his friends a way out of the cave but they don't want to leave
  • The cave is an allegory to describe how people are often afraid to face the truth of reality
  • Embracing the truth and developing self-awareness is the only way we can start to grow
  • It can be a painful process, but the more painful truth we are able to withstand, the better
  • We all have our blindspots and live in the dark to some degree and so we must stay humble


Further Reading

Dweck, Carol S.: 'Mindset: Changing The Way You Think To Fulfil Your Potential' (Robinson, 2012)

Festinger, Leon: 'A Theory Of Cognitive Dissonance' (Stanford University Press, 1957)

Locke O'Brien, Katie & Maats, Hunter: 'The Straight-A Conspiracy' (368 Press, 2013)

Plato: 'The Republic' (Penguin, 2008)

Scott, Kim: 'Radical Candor: How to Get What You Want by Saying What You Mean' (Pan, 2018)

Syed, Matthew: 'Black Box Thinking: Marginal Gains and the Secrets of High Performance' (John Murray, 2013)

 

Philosophical Razors: The Cutting Edge in Ancient Wisdom

Written by Joss Duggan (Reading Time: 11 mins)

Razors...cutting-edge wisdom? See what I did there? Putting philosophical puns to one side for a second, there's an old adage that 'there's nothing new under the sun' - that all 'new' ideas are just remixed versions of wisdom that has been around for millennia. Philosophical razors are a brilliant example of this: critical thinking tools that, used correctly at just the right moment, can be a valuable asset whether you're sat in the boardroom or the bar.

Similar to the idea of heuristics from the world of psychology, razors work by figuratively 'cutting away' the unnecessary parts of a question and stripping it down to the essentials, so we can better understand the problem at hand. 

With that in mind, here are some of the most useful razors, along with some ideas for how you can apply them in your own life. Let's start with the big one...

 

1. Occam's Razor (Keep it simple, stupid!)


By far the most famous example (and an obvious place to start), Occam's Razor is at first glance a simple truism that seems so obvious as to be unworthy of discussion. But don't be fooled! The logic and usefulness of Occam's Razor are hotly debated by the scientific community. But before getting into the nitty-gritty, here it is:

Entities should not be multiplied beyond necessity
— William of Occam

Well... for accuracy, what he actually wrote was "Plurality should not be posited without necessity", but another medieval scholar (a guy by the name of John Punch) decided that it just wasn't snappy enough and rewrote the phrase, which is now widely attributed to William of Occam. Thanks to Arthur Conan Doyle, yet another version should be much more familiar to you: "When you have eliminated the impossible, whatever remains, however improbable, must be the truth".

In a nutshell, what Sherlock was telling us is that the simplest explanation for anything is the most likely. So if confronted with two or more possible explanations, the one that satisfies all the available evidence with the fewest assumptions is most probably the right one.

Things should be made as simple as possible, but no simpler!
— Albert Einstein

But therein lies the key - it must satisfy all the available evidence. Just because an explanation is the simplest one available doesn't make it the best one. If it fails to explain (or flat-out ignores) key facts, then it remains problematic. The best explanation, then, is the one that fits the evidence using the fewest assumptions.
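If you're of a programming persuasion, the two-step logic above can be sketched in a few lines of code. The scenario and its numbers are entirely invented for illustration - the point is simply that filtering on evidence comes first, and counting assumptions second:

```python
# A toy sketch of Occam's Razor: first discard any explanation that fails
# to account for ALL of the evidence, then prefer the fewest assumptions.
# (Scenario and assumption counts are made up purely for illustration.)

facts = {"lights flickered", "laptop rebooted"}

explanations = {
    "power cut":           {"assumptions": 1, "explains": {"lights flickered", "laptop rebooted"}},
    "two separate faults": {"assumptions": 2, "explains": {"lights flickered", "laptop rebooted"}},
    "laptop virus":        {"assumptions": 1, "explains": {"laptop rebooted"}},
}

# Step 1: only explanations that cover every fact survive.
viable = {name: e for name, e in explanations.items() if facts <= e["explains"]}

# Step 2: of those, pick the one with the fewest assumptions.
best = min(viable, key=lambda name: viable[name]["assumptions"])
print(best)  # -> power cut
```

Notice that 'laptop virus' has just as few assumptions as 'power cut', but it never even reaches step 2, because it ignores a key fact - exactly the trap described above.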

 

2. Grice's Razor (You know what I meant)


Our second razor comes from philosophy of language and semantics: Paul Grice was a British philosopher who spent the majority of his career at Berkeley, developing theories on how meaning and language interact, and particularly what people mean when they 'imply' meaning.

Grice's Razor is a play on Occam's, highlighting the value of simplicity (AKA 'parsimony') in interpreting meaning. I'll spare you the long version (it's pretty dense) but the short version is:

Senses are not to be multiplied beyond necessity
— Paul Grice

Grice is saying that context is king and the 'literal' version of what is being said shouldn't be taken in isolation. Let's look at a quick example:

David: Kate - Are you coming to the sprint planning meeting?

Kate: Let me just grab a coffee...

After David asks the question, in the literal sense, Kate hasn't answered the question. Now I know what you're thinking: 'Don't be pedantic, we know what she meant'. But 'how' do we know?

As a reader/listener, you 'infer' meaning from the sentence; namely, that Kate is going to join the meeting immediately after she has grabbed a cup of caffeine (because presumably it's going to be a long one). Even though there isn't a definitive 'yes' or 'no' in her response, it's safe to assume that she'll be along soon. We make these assumptions every day, and when we do, we're using Grice's Razor.

Now this is why it's so frustrating when people violate this unspoken trust. When you're 'economical with the truth' and say something that, whilst 'technically' true, you know is going to mislead the listener or reader, that's a lie of omission. Honesty isn't just about using precise speech and avoiding explicit lies; it's also about being forthright with the truth when you know that someone is expecting it. To do otherwise just wouldn't be cool, would it? Come on... don't be that guy.

 

3. Hume's Razor (Evidence must equal claims)


The first two precepts describe the value of parsimony - how simplicity can lead us to better answers. The following concepts then build on those ideas towards a kind of theory of knowledge. Next up is the Scottish Enlightenment philosopher, David Hume:

No testimony is sufficient to establish a miracle unless that testimony be of such a kind that its falsehood would be more miraculous than the fact which it endeavours to establish
— David Hume

Ummmm...thanks Dave. Clear as mud.

An interesting (and incredibly annoying) feature of philosophy is that ancient writings are often easier to understand than more modern ones, because someone has done the hard work of translating both the language and the meaning into plain English. The medieval and Enlightenment philosophers, whilst they 'technically' wrote in English, can often be as impenetrable as Shakespeare to the uninitiated. So let's turn to the legendary American astronomer and science communicator, Carl Sagan, to give us a simpler explanation:

Extraordinary claims require extraordinary proof
— Carl Sagan

What has now become known as 'Sagan's Standard' is a reformulation of Hume: in order to prove something incredible, the evidence must be equally incredible. For example, proving a claim such as the existence of extra-terrestrial life would require extraordinary levels of proof. The literal 'awesomeness' of claims and evidence must be equal and opposite.
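One way to make Sagan's Standard concrete is Bayes' rule in its odds form: your belief in a claim after seeing evidence equals your prior odds multiplied by how much more likely that evidence is if the claim is true. The sketch below uses invented numbers purely for illustration, but it shows why the same evidence that settles an ordinary claim barely moves an extraordinary one:

```python
def posterior_probability(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# An ordinary claim ("the train was late") with decent evidence:
print(posterior_probability(prior=0.3, likelihood_ratio=10))    # ~0.81 - case closed

# An extraordinary claim (a one-in-a-million prior) with the SAME evidence:
print(posterior_probability(prior=1e-6, likelihood_ratio=10))   # ~0.00001 - barely moved

# Only extraordinary evidence (a huge likelihood ratio) makes it credible:
print(posterior_probability(prior=1e-6, likelihood_ratio=1e7))  # ~0.91
```

In other words, Hume's "more miraculous" test is just arithmetic: the evidence has to be improbable enough to outweigh the improbability of the claim itself.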

 

4. Hitchens' Razor (No evidence, no argument)


Following on from Hume and Sagan (your extraordinary claims need some extraordinary evidence, please) is Hitchens' Razor. Irrespective of your philosophical leanings, you have to marvel at the dry, acerbic and sardonic style of Christopher Hitchens. Let it never be said that Hitch ever shied away from a good debate, taking down arguments with a barrage of rhetorical techniques, whether bona fide logical device or straight-up sophistry.

First appearing in an article for Slate in 2003, and later in his 2007 book 'God Is Not Great: How Religion Poisons Everything', Hitchens' Razor is all about where the burden of proof lies when you're faced with an unsubstantiated claim. Here's the man himself:

Forgotten were the elementary rules of logic, that extraordinary claims require extraordinary evidence and that what can be asserted without evidence can also be dismissed without evidence
— Christopher Hitchens

It's a variation on an old Latin proverb - "Quod gratis asseritur, gratis negatur" - which translates as "what is freely asserted is freely denied". Hitchens means to say that if you turn up to a debate without any empirical evidence, don't expect anyone to entertain your claims, and don't be surprised when you get shut down. It's the rhetorical equivalent of bringing a knife to a gun fight, and you do yourself a disservice. Though Hitchens applied this line of thinking mainly in theistic debates, it's applicable everywhere. Watch the man in action to see for yourself.

So if you've ever sat in a meeting listening to someone talk about their pet theories (especially the HiPPOs - the Highest Paid Person's Opinion), or observed that decisions are being made based on opinion instead of fact, this is the moment to invoke Hitchens' Razor (perhaps more diplomatically than Hitchens himself did) and swiftly bring that discussion to a close with the following phrase:

"I think that's a really interesting point you make, what data has led you to believe that?" 

 

5. Alder's Razor (No experiment, no argument)


Mike Alder is an Australian mathematician who, in his now-famous article for Philosophy Now magazine, claimed that his version is '...sharper and more dangerous than Occam's Razor'. Alder's Razor is much better known by its altogether more colourful moniker: 'Newton's Flaming Laser Sword'.

Now, I know what you're thinking... and yes, a 'Flaming Laser Sword' is really just a fancy lightsaber. But the name matters less than the concept. So what is Alder's Razor?

That which cannot be settled by experiment is not worth debating
— Mike Alder

It speaks to a philosophical debate dating back thousands of years - whether 'pure reason' alone can solve the mysteries of the universe - with intellectual giants of the field on both sides of the argument. Whatever your opinion, Alder's Razor is a useful little tool for moving forward when you get bogged down in conference-room debates.

If you've read The Lean Startup by Eric Ries (and if you haven't, you definitely should - it's a game-changer!) you'll understand how well the scientific method can be applied to organisations of all shapes and sizes, from start-ups to multinational corporations and everything in between. The main idea is to get all your assumptions out on the table and systematically work through them to find out whether your hypotheses are correct. This produces what Ries calls 'validated learning', which is what all young ventures should be focused on.
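In practice, 'settling it by experiment' often means something as simple as comparing two versions of a page and checking whether the difference is bigger than chance. Here's a minimal sketch of that idea - a two-proportion z-test using only the standard library. The conversion numbers are entirely hypothetical:

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via the error function.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical experiment: old landing page vs redesigned one.
p_value = two_proportion_p_value(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
print(f"p = {p_value:.3f}")  # a small p-value suggests the change genuinely mattered
```

That's Alder's Razor in miniature: instead of debating in the conference room whether the redesign is 'better', you run the experiment and let the data close the argument.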

Now, you may have spotted a problem with the application of Alder's Razor. There are many areas in which it's either incredibly hard to run experiments (politics) or completely impossible (religion). Add to that, it basically kills off more than half of the entire philosophical canon. So use with EXTREME CAUTION - let's not accidentally kill off the field completely; philosophers struggle to find jobs as it is.

 

6. Hanlon's Razor (People aren't evil...just stupid)


Whilst described as a computer programmer from New Jersey, Robert J. Hanlon remains something of a mystery. When Arthur Bloch was compiling a book of funny philosophical musings in 1980, he received the following submission, which has since become known as Hanlon's Razor. Whilst the essence of the aphorism has appeared in the writings of David Hume, William James and Richard Feynman, Hanlon's version remains the one that's stuck:

Never attribute to malice, that which can be adequately explained by stupidity
— Robert J. Hanlon

When things go wrong, we seem to have a cognitive bias towards ascribing wrong-doing and 'evil' intent: when someone is late to a meeting we've called, they're disrespecting us on purpose; when the kids a few rows in front at the movies are talking, they're doing it to annoy us!

In fact, this is very closely related to a well-known and substantiated cognitive bias in psychology called the actor-observer bias: whenever 'we' make a mistake, we blame temporary, external influences (e.g., the traffic made me late), whereas when someone else makes the exact same mistake, we attribute the infraction to an internal, permanent characteristic of that person (e.g., that guy is lazy and disrespectful).

Those most susceptible to this line of thinking are people with narcissistic personality disorder, as they see everyone else's actions only in reference to themselves. So if your boss, or someone you know, constantly flies off the handle, blames people for their maliciousness, or even calls people 'evil'... you may, in fact, have a narcissist on your hands.

No-one is the villain in their own story
— George R.R. Martin

Whether consciously or not, we all think of ourselves as the heroic protagonist in the sweeping, epic tales of our own lives - though we may well be the villain in someone else's.

So we must give people the benefit of the doubt; if someone has screwed up or (worse) really hurt us, it's probably not personal or intentional - they probably just didn't think it through. Everyone is doing the best with what they have - and that includes both IQ and EQ. So next time it feels like someone is out to get you, exercise the empathy that you only wish they had been able to exercise themselves.

Caveat: Never rule out the possibility of both stupidity and malice in combination, it does happen...


 

To Sum Up

  • Occam - All things being equal, simple answers are better, as they have fewer assumptions

  • Grice - Honesty is as much about what you don't say as what you do say

  • Hume - All claims need equally substantial evidence (qualitative or quantitative) to back them up

  • Hitchens - If you don't have any evidence, then we don't need to have a debate

  • Alder - If you can't go and get evidence by running an experiment, well...refer to Mr Hitchens

  • Hanlon - Be patient with people (especially those without evidence) they're not (that) evil


Further Reading

Alder, Mike: 'Newton's Flaming Laser Sword' (Philosophy Now, 2004)

Bloch, Arthur: 'Murphy's Law Book Two: More Reasons Why Things Go Wrong' (1980)

Carroll, Robert Todd: 'Occam's Razor' (2014)

Grice, Paul: 'Studies in the Way of Words' (1989)

Hyman, Arthur & Walsh, James J.: 'Philosophy in the Middle Ages' (1973)

McAleer, Michael: 'Simplicity, Inference and Modeling: Keeping it Sophisticatedly Simple' (2002)

Ries, Eric: 'The Lean Startup' (2011)

Sober, Elliott: 'What is the Problem of Simplicity?' (2004)

Sober, Elliott: 'Ockham's Razors: A User's Manual' (2015)

Stone, Jon R.: 'The Routledge Dictionary of Latin Quotations' (2005)

Thorburn, W. M.: 'The Myth of Occam's Razor' (1918)