Parasitic Language Games
maintaining ambiguity to hide conflict while burning the commons
“They are playing a game. They are playing at not playing a game. If I show them I see they are, I shall break the rules and they will punish me. I must play their game, of not seeing I see the game”
- R. D. Laing
"It's not lying if everyone knows it's lying."
I see this sentiment in a lot of places. It pops up in corporate managerial contexts. It's been used as a legal defense and worked. It's the claim that communication which looks adversarial isn't; it's just high-context communication between people "in the know": there's no deception happening, no conflict, you just don't get how we do things here.
I don't buy it. My claim in a nutshell:
In situations where people insist "it's not lying because everyone knows it's lying," the people in the know aren't deceiving each other. But the reason this game is being played is to fool people not in the know, and insisting that it's just "high-context communication" is part of an effort to obscure the fact that a conflict is going on.
If that makes perfect sense to you, dope, you already get my main point. The rest of this post is adding nuance, actually arguing the case, and providing more language for talking about these sorts of dynamics.
Case Study: "Are Founders Allowed to Lie?"
This essay by Alex Danco talks about how "it's not lying because everybody knows it's lying" works in the Silicon Valley startup scene. It's short enough that it's worth reading now so you can decide for yourself if I'm misrepresenting him. If you don't feel like reading it I still quote enough of it for my post to make sense.
It's really hard to start a business without lying:
If you are only allowed to tell the literal, complete truth, and you’re compelled to tell that truth at all times, it is very difficult to create something out of nothing. You probably don’t call it “lying”, but founders have to will an unlikely future into existence. To build confidence in everyone around you – investors, customers, employees, partners – sometimes you have to paint a picture of how unstoppable you are, or how your duct tape and Mechanical Turk tech stack is scaling beautifully, or tell a few “pre-truths” about your progress. Hey, it will be true, we’re almost there, let’s just say it’s done, it will be soon enough.
It's not lying because everyone's in on it.
You’re not misleading investors; your investors get it: they’re optimizing for authenticity over ‘fact-fulness’. It’s not fraud. It’s just jump starting a battery, that’s all.
Some abstracted examples of what this "pre-truth" looks like:
You’ve all seen this. It doesn’t look like much; the overly optimistic promises, the “our tech is scaling nicely” head fakes, the logo pages of enterprise customers (whose actual contract status might be somewhat questionable), maybe some slightly fudged licenses to sell insurance in the state of California. It’s not so different from Gates and Allen starting Microsoft with a bit of misdirection. It comes true in time; by the next round, for sure.
Why it's important and also why you can't talk about it:
Founders will present you with something pre-true, under the total insistence that it’s really true; and in exchange, everyone around them will experience the genuine emotion necessary to make the project real. Neither party acknowledges the bargain, or else the magic is ruined.
Before investigating if Danco's story checks out I'm going to introduce some frames for talking about communication to make it easier for me to clarify what's going on here.
Context & Language Games
All communication relies on context, and context has a nested structure which operates on multiple levels of communication. Some context operates piece-wise at the level of words and phrases; "me", "now", and "this" all offload their meaning to the surrounding context. Besides resolving the referents of particular words and phrases that are ambiguous without context, there's a broader way in which all communication is contextual to your understanding of the world, your interlocutor, and what you think you're both trying to do by saying words.
There’s a sort of figure-ground dynamic where the shared understanding of the pre-linguistic “Situation” is the ground that allows explicit communication to even be a figure. The explicit components of communication provide a set of structured references that pare down possible things you could be talking about and doing via talking about them, and it’s the ground, the Situation, that allows your utterances to “snap-to-grid” and form a coherent unit of communication that another person can interact with. Without the “so what?”, no communication really makes sense. The only reason this isn’t always obvious is because the “so what?” can be so simple, general, or obvious that it barely feels worth mentioning. If I shout “Fire!” the “so what?” is so patently obvious to any person who’s flammable, the snap-to-grid happens so instantaneously, that it barely feels like a “step” in the process of understanding. Non-sequiturs can only be a thing because people are always attending to “what is The Situation and what are we both doing here?” in a very general sense.
"Situations" can vary from incredibly stereotyped and well trodden, like "gas station clerk and customer", to idiosyncratic and bespoke, like the communication between a couple that’s been married for 50 years. In addition to the relational aspects of context, people also have lots of shared "modes" of context. Story-telling, jokes, sarcasm, earnestly explaining your beliefs: these are all high-level contextual modes that have different answers to “what are we doing with words here?”
I use the term Language Game to gesture at the whole package-deal process a person is using for meaning-making at any given moment. The term comes from Wittgenstein, who introduced it largely to shed light on the vast diversity of things people do when communicating. This pushed back against his philosophical peers, people like the Logical Positivists and Bertrand Russell, who claimed that the basis of language was asserting propositions. Wittgenstein claimed that things like:
Giving orders, and obeying them
Describing the appearance of an object, or giving its measurements
Constructing an object from a description (a drawing)
Reporting an event
Speculating about an event
Forming and testing a hypothesis
Presenting the results of an experiment in tables and diagrams
Making up a story; and reading it
Making a joke; telling it
Solving a problem in practical arithmetic
Translating from one language into another
Asking, thanking, cursing, greeting, praying.
are all very different uses of language, uses which incorporate propositions but can’t be accounted for in terms of propositions alone. When I talk about language games I’m also trying to highlight the interlocking nested layers of context that language is processed through in order for an utterance to be "snapped-to-grid" and mean something to you.
In plenty of cases people can notice the discrepancies between the language games they're playing and sync up to create a more coordinated understanding of The Situation. In plenty of cases this doesn't happen easily. Weird stuff happens when people don't understand or don't care that they're using the same words to play very different language games.
All that being said, here's how I'd phrase Danco's claim in my own language:
Instead of playing the "truth-claim" language game, where one interprets others as making actual truth claims about the way things are and when you speak you expect others to interpret you similarly, they are synced up on playing the pre-truth language game. In this game it's understood that when a founder tells a VC that they've got a demo ready right now, the VC knows that a demo may or may not actually be ready and that they aren’t expected to believe it definitely is. The utterance was made under the shared context that the specifics are less important than enacting a role, conveying "authenticity", and "evoking in people the genuine emotion necessary to make the project real". Pre-truths aren't part of an effort to deceive, but are a way to engage in a social ritual.
I think Danco describes a lot of the pre-truth game pretty well. Where I disagree is when it comes to the effects of the pre-truth game and the function it serves in the startup ecosystem. Specifically, he claims that 1) no one's getting deceived / there's no conflict happening / this is a cooperative game for all involved and 2) the pre-truth game provides positive sum benefits to the ecosystem that have nothing to do with deception. But before arguing those points I’m going to contrast the pre-truth game with a similar looking situation to further highlight how it’s supposed to work.
Imagine an environment where everyone is playing the truth-claim game but there's a high frequency of liars. Remember, the truth-claim game is characterized not by everyone being honest, but by how people expect to be interpreted. We'll call this a low-trust truth-claim situation. Here, if a VC has a way to verify the claims of a founder or is plugged into a functional reputation network, they can make decisions about the honesty of their interlocutor. When they don't have that, VCs will likely fall back on base-rates about lying in the ecosystem and "round down" the claims of founders when guessing at what's actually the case.
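To make the "rounding down" move concrete, here's a toy model of my own (the function and all the numbers are illustrative assumptions, not anything from Danco's essay): a VC with no way to verify a claim discounts it by the base rate of exaggeration in the ecosystem.

```python
# Illustrative toy model (my own, not from the essay): in a low-trust
# truth-claim environment, a VC without verification estimates the true
# value behind a founder's claim by mixing honest and dishonest founders.

def discounted_estimate(claimed_value, p_liar, exaggeration_factor):
    """Expected true value behind a claim.

    p_liar: fraction of founders who exaggerate.
    exaggeration_factor: how much exaggerators inflate the truth
    (e.g. 2.0 means they report double the real number).
    """
    honest_part = (1 - p_liar) * claimed_value
    liar_part = p_liar * (claimed_value / exaggeration_factor)
    return honest_part + liar_part

# A founder claims $1M ARR; suppose 40% of founders double their numbers.
print(discounted_estimate(1_000_000, 0.4, 2.0))  # 800000.0
```

The point of the sketch is just that in a low-trust truth-claim environment the discount is a defensive guess about honesty, whereas in the pre-truth game the same arithmetic happens as a matter of shared convention.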
This is not the situation Danco is describing. The pre-truth game is a high trust cooperative dynamic among those playing it, where founders "rounding up" and VCs "rounding down" happen as a matter of convention. Keep that in mind as we poke at the rest of his story.
Who benefits from pre-truth?
The pre-truth game is cooperative among those in the know who are playing it, but what about everyone else? Danco makes clear that not everyone is in on it. The rules and limits of the pre-truth game are nebulous and implicit, part of a "nudge-wink fraternal understanding", and even the existence of the game is taboo to talk about. People outside Silicon Valley aren’t synced up on playing pre-truth with founders. He also notes that there are plenty of founders who don't pre-truth, not just for principled reasons, but because they don't have social access to "the rules". Given all the obfuscation and the nature of the pre-truth game I'm very confident that not everybody knows, and there's going to be plenty of collisions between pre-truthers and truth-claimers. When these mismatches occur, the mismatch will "fail silently" and not be immediately obvious to either party.
In these mismatches, are the truth-claimers being lied to? A better question: "when these mismatches in language games occur, who reliably benefits?" Since pre-truthing founders are always "rounding up", truth-claimers will see the investment opportunity as better than it actually is. It will be situational as to whether or not this would counterfactually change the investor's decision (you could decide not to invest even with the rounded-up assessment, and both the rounded-up and actual assessment could indicate good bets to you), but either way they’ve been reliably misinformed. The pre-truthers frequently get more capital than they would have otherwise, and the truth-claimers frequently make less informed decisions than they would have otherwise. The directional gains from this are clear.
A defender of pre-truth might say:
"Okay fine, there are negative externalities, but those are accidental! Sure we benefit when someone mistakes pre-truthing for truth-claiming, but that's not why we pre-truth. It's a crucial dynamic that has this 'je ne sais quoi' which helps both founders and VCs in a non-extractive, non-deceptive way. The negative externalities are lamentable, but it's a trade-off to get the secret sauce, and this only works if we don't talk about it, so there's not really a way to get rid of the negative externalities."
The emphasis on how important it is to not talk about the game is very suspicious to me. It's clear why the existence of the pre-truthing game needs to be obscured if it's primarily an extractive play; potential marks need to not know about the game in order to be fooled. It's a lot less clear what sort of non-extractive secret sauce this game could have that requires it to not be talked about openly.
Remember, for this game to not be deceptive between VCs and founders, it needs to be the case that when founders claim to have made certain progress, the VCs know that there's a decent spread on what the actual situation could be. And if they're still deciding to invest, that means their risk-profile is okay with that spread of possibilities. Which means if they'd both been operating from a truth-claim orientation, the founder could have just told them how things are and nothing would have been lost.
"But since most people are already playing the pre-truth game, if you don't 'round up' then VCs will 'round down' your pitch and your startup will look less promising than it is! You have to pre-truth just to maintain equal footing."
This sounds like a benefit of the pre-truth game, but it's not. This is claiming that the pre-truth game is merely the prevailing equilibrium of a coordination game, and that you'll be misunderstood if you don't play. If the merits of syncing to the pre-truth game were just that it was the prevailing coordination point, people should have no preference for the pre-truth game over the truth-claim game except for the costs of switching. And this isn't the sort of coordination equilibrium that's hard to switch out of. Consider a canonically difficult coordination problem: getting all your friends to switch to a different social media platform. Different platforms have different merits, and the biggest benefit of a platform is its network effects. If you hate Twitter and want to switch, you could make a platform that's in many ways superior, but it won't get anyone more of what they want until a critical mass switches to the new coordination point.
Switching from the pre-truth game, which we've identified as having negative externalities, to the truth-claim game involves none of the qualities that make for hard coordination problems. There are no network effects; the only people you need to coordinate with for an interaction to work are the people you're currently talking to. This also isn't a Schelling-point-like problem where you have to pick the same coordination point without talking to each other and hope you picked the same thing. You can simply clarify with each other what language game y'all are gonna be playing. And remember, VCs aren't rounding down because it's a low-trust truth-claim environment; they're rounding down as part of a cooperative understanding, which means you don't have to deal with the difficulty of reliably communicating honesty. In fact, the only thing that makes it hard to switch out of the pre-truth game equilibrium is the taboo about acknowledging the game. The taboo is an active force that props up both the negative externalities of the game and the "you take a hit if you don't play" incentive.
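One way to see why this coordination problem is easy: matching language games is pairwise, so payoffs depend only on whether the two people currently talking are synced. A toy payoff table (entirely my own illustrative numbers, not anything from the essay) might look like:

```python
# Toy pairwise coordination sketch (my framing, made-up payoffs):
# each cell maps (founder's game, VC's game) -> (founder payoff, VC payoff).

PAYOFFS = {
    ("pre-truth",   "pre-truth"):   (1, 1),   # synced: the ritual works
    ("truth-claim", "truth-claim"): (1, 1),   # synced: plain talk works
    ("pre-truth",   "truth-claim"): (2, -1),  # VC over-credits a rounded-up pitch
    ("truth-claim", "pre-truth"):   (-1, 0),  # VC rounds down an honest pitch
}

def payoff(founder_game, vc_game):
    """Look up the outcome for one founder-VC interaction."""
    return PAYOFFS[(founder_game, vc_game)]
```

Both synced cells do equally well, so there's no intrinsic reason to prefer the pre-truth equilibrium; the asymmetric mismatch cells, where a pre-truthing founder gains at a truth-claiming VC's expense, are the only cells that reward keeping the game unspoken.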
Hyperstitioning Coordination for Stag Hunts
The last vaguely plausible defense I can imagine for the pre-truth game:
"Fine, I'll spill the beans on the secret sauce. We do it as part of a stag hunt coordination strategy. For a startup to succeed, you don't just need to convince people you have a good idea, you need to convince them that others will buy in as well."
This is the only angle that can give even a little bit of an explanation for all the obfuscation. It... almost makes sense. Something like the pre-truth game is by no means the only or best strategy for coordinating people in a stag hunt, but it can get the job done. This is what I think Danco is gesturing at when he talks about getting everyone to "experience the genuine emotion necessary to make the project real".
I do agree that there's a non-trivial stag-hunt aspect to the task of making a startup. A huge part of convincing any given person to participate, whether it's first hires or investors, is convincing them that you can convince others. As an employee I could think the idea is great but you'd need a lot of capital, and if I don't expect you to be able to raise enough money I won't join. As a VC, I could believe in your business but if I'm not willing or able to fund you the whole way through, funding you now means making a bet that you can convince others to fund you down the road.
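The stag-hunt structure of fundraising can be sketched with a minimal payoff function (the threshold and payoff values are made-up illustrative numbers, not a claim about real deal terms):

```python
# Minimal stag-hunt sketch (illustrative values of my own): a backer only
# gains from joining if enough others join too, which is why convincing
# people that you can convince others is part of the pitch.

def backer_payoff(joins, others_joining, threshold=3):
    """Payoff to one backer given how many others also join.

    Joining pays off only if total participation clears the threshold
    needed for the startup to be viable; staying out is the safe option.
    """
    if not joins:
        return 0  # the safe "hunt hare" option
    total = 1 + others_joining
    return 5 if total >= threshold else -2  # stag succeeds or fails

print(backer_payoff(True, 2))  # 5: enough buy-in, the stag hunt succeeds
print(backer_payoff(True, 1))  # -2: joined, but coordination failed
```

The payoff to joining hinges entirely on expectations about everyone else, which is the opening a pre-truth-style confidence ritual exploits; nothing about the structure requires that the coordination signal be a fudged metric.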
The problem is that there's way more that goes into starting a successful company than just getting people to believe in you. The glowing cloud of endorsement is needed in addition to the tech being possible, having the talent, having a good product, the market conditions being right, and basically everything else about making a business work. For someone to think it’s a good idea to join you, they need to be convinced that all of those details check out. How are you going to legitimately convince people of that? The pre-truth game doesn't come with any way of clarifying which parts of your pitch are coordinative pre-truth hype and which parts are actual real aspects of your plan that are important for others to understand. The nature of the game and the obfuscation around it works against such clarification. It actively degrades everyone's ability to clearly communicate about the very real underlying reality which someone needs to be keeping track of for this whole thing to work.
This whole game actively makes it easier for grifters to succeed in your ecosystem. Sure, maybe you happen to be the heart-of-gold-god-tier-competence-noble-lie-champion that can make pre-truth work in a way that benefits everybody, but you did it at the cost of reinforcing a game that breaks the error-correction capacity of your community. The taboo on talking about the game pushes against people openly trying to keep track of who’s playing which language game and sharing that information, and if you try to update the mainline reputation network on people’s honesty you’ll face resistance because “it’s not lying, stop trying to ruin people’s reputation!” It’s not impossible, but it becomes much harder. All this means grifters can flood in with little resistance. You get all these negative consequences, and it's not even the case that pre-truth is the only way to get coordinated group buy-in!
The pre-truth game, combined with the taboo on talking about it, produces minimal positive gains, ones that can be achieved by other means, while creating a context that reliably extracts from those not in the know and gums up the error-correction capacity of the ecosystem in a way that will lead to increasingly extractive behavior over time. In other words, it's fucked.
Generalizing: Parasitic Language Games
I'm not particularly entangled with the SF startup scene, so while I care about trying to set the record straight about what's going on there as a matter of principle, I mostly care about it as a way to illustrate the general dynamic.
I call the pre-truth game a parasitic language game with respect to the host language game of truth-claims. Its existence is powered by gains it extracts from those who mistake it for the host. Sometimes these gains are had through straightforward deception, but they can also be had when obfuscating the playing field creates enough plausible deniability that third parties can be prevented from intervening on the underlying conflict.
A parasitic language game does damage to the host language game, and to the entire discursive ecosystem it inhabits. Those playing the host game who are more trusting wind up the marks who are deceived and extracted from. Those playing the host game who catch on to the mismatch between what is said and what is functionally going on find themselves in a low-trust environment that, for mysterious reasons, is fighting against the typical verification and reputation-network methods that can ease the burden of having to sift through the untrustworthy. When players of the host game try to confront parasitic players, they'll first be met with insistence that the parasitic players are in fact playing the host game, and with offense at being attacked. If pressed further, the parasitic player will admit to playing a different game and quickly pivot to lines like "that's just how things are done here..." and "don't be so naive, everybody knows..." which serve the dual purposes of avoiding reputational blow-back among host players and instilling the taboo about open communication in the interrogator, so that they can keep pretending the parasitic game doesn't exist.
Importantly, engaging in a parasitic language game is part of a coalitional strategy. If you're pre-truthing all on your own, you're simply a liar. You need a sufficient amount of people playing the game together in order to have the cover of "things work differently here, you just don't get it." I don't have well formed thoughts on how parasitic language games can come to dominate a given ecosystem but I'd guess it's somewhat related to what Ben Hoffman talks about in his "Guilt, Shame, and Depravity" post.
While many biological parasites have control systems to create a "controlled burn" that ensures the host stays alive long enough for the parasite’s needs, parasitic language games don't have similar steering mechanisms. The constraint of operating strictly in the implicit imposes a huge loss of organizational capacity. If the players in the know were also engaged in overt conspiracy, frequently talking clearly with each other to keep track of the underlying reality while staying out of sight of those not in the know, they'd retain a lot of their ability to strategize and steer the situation. But it seems like parasitic strategies are most often powered by compartmentalization, motivated ignorance, and self-deception. Such parasitic language games have a short life span, since the players are harming their collective ability to keep track of and communicate clearly about the underlying reality of their situation. One way a parasitic language game can survive longer is if it reaches the point of becoming a "too big to fail" dynamic.
A closing thought: the communication distortions I’ve been exploring are ones that are actively being maintained, both as a way to hide the presence of a conflict and as a tool to engage in said conflict. They’re very different from the conflict-language entanglements where problems in communication generate a conflict, one that could be resolved if only people could work through their mutual misunderstandings. When it comes to parasitic language games, the typical pro-social "resolve misunderstandings and ambiguity" tool-box isn't going to help because the misunderstandings and ambiguity are serving a purpose.
I’d like to be able to share thoughts on combating these dynamics, but I don’t have too many. At a general level, acting more directly on the underlying conflict instead of on the communicative symptoms seems like a start. Just being aware at all of the games people play is useful for staying oriented and not getting mystified by all the obfuscation. If anyone has more worked out strategies I’d love to hear them.