
Mo Bitar

Notes to self. Working on Standard Notes, an end-to-end encrypted notes app.

Techno-conservatism

It’s bleak and rainy outside. I woke up earlier than usual this morning, and even before I saw what it looked like outside, my insides matched. So it’s the perfect day to write a rage piece against the bewildering behavior of what I can only describe as techno-conservatism, whose followers seem to absolutely loathe any sort of movement or innovation in the space. Have you seen the comment threads on Hacker News on articles about Signal’s new crypto payments feature? Every single one of them a lambasting. Common phrases include scam, pump and dump, no one asked for this, why not use Stripe, why, and an endless barrage of linguistically creative ways to block the movement of a product towards any particular future.

This isn’t an isolated incident. Perhaps crypto is a heated topic, but almost any sort of groundbreaking technology or innovation in a fast-moving space receives the same treatment. I mention HN because their commenters are usually the most rational. So if on HN comments have devolved into reddit quality, then I fear looking at what’s become of reddit.

But let me try not to rage against their rage and instead interpret events from their vantage point. At this point I’ve come to understand there is no way this is about the particulars. No matter the topic, you will see the same breed of comments and commenters. And the dissent is always louder than the support—people in favor of, say, cryptocurrencies will be a lot less violent about their support than dissenters are about their condemnation. The single safest thing you could do if you support something controversial is probably to keep it to yourself. So rationalists are overrun on comment threads, and techno-conservatists thrive.

I think rather than focusing on the particulars, we can break this down into something much simpler: there are two camps of people. Those who believe the world is progressing towards something worse. And those who believe the world is progressing towards something better. Those in the “worse” camp will likely see any event in any space as a sign of the impending doom, and attack it mercilessly like a runaway immune system. And those in the “better” camp can see any event as a sign of the positive future to come.

Crypto is an excellent divider, slicing these groups sharply in the middle. On the impending doom side, crypto is a sign of energy waste, get rich quick schemes, techno-elitism, scams, and a thousand other loosely related consequences. At this point techno-conservatists have gotten so good at rational gymnastics and linguistics that crypto can be linked to almost any major issue. On the better future side, crypto is a sign of financial liberation, decreased power of government, decentralization of currency, and a thousand other tightly related consequences. Techno-progressivists have also gotten so good at the language game that almost any issue can seemingly be solved with crypto—which you believe to be true, as I probably do, if you're on the bright side.

So it’s made me feel a little better to understand that the event absolutely does not matter. It could be Signal adding crypto payments or it could be Facebook creating a cryptocurrency or it could be most anything of the format “X company does Y crypto,” and you will immediately trigger the two camps in their respective manner. The techno-conservatists will put on their thinking glasses and write a compelling thesis on why this move will likely only inch us one step closer towards doom, and the techno-progressivists will, in lower fearful quantities, write their thesis on why this move should be applauded and how it brings us one step closer towards a brighter future.

The techno-conservatist knee-jerk reaction to any innovation they don’t understand, or that is too sudden and abrupt, and that perhaps other people are getting rich off, is: who needs this? Why this thing and not this other preexisting thing? Can we slow down a bit? I mean, we’re ignoring all these other million factors. People still don’t have clean drinking water and you want to write more crypto code? In essence: can we just keep everything as-is for the next 1000 years, because I’m sort of worn out keeping up with all this stuff.

The techno-progressivist knee-jerk reaction to any innovation they don’t understand and others are getting rich off is likely: what’s wrong with me? Why have I overlooked this? Damn, there are people smarter than me who are on top of these things while I’m here watching TikTok? Wait, Moxie, the genius cryptographer behind Signal’s and WhatsApp’s encryption, is working on this? What a goddamned legend. I’m an absolute idiot for not understanding this or looking into it sooner.

Funnily enough, there’s actually a mathematical way to measure just how idiotic you are. It’s called the price of Bitcoin. If you refuse to touch crypto with a twelve-foot pole, you are infinitely idiotic; otherwise your level of idiocy is measured by how high a price you paid for being late. I say this mostly humorously and self-reflectively. In some technologies I am indeed an idiot and have looked into them far later than others. But I suppose that’s key to the distinction between techno-conservatists and techno-progressivists: allowing yourself to be ok with being an idiot. I mean, likely you are. There’s no way any one person is not infinitely idiotic with regards to anything they’re not paying attention to. Forgive yourself, accept yourself, and yield to others’ lesser relative idiocy in a space.

Just yield, man.

How does Naval speak so eloquently?

Have you ever heard Naval speak? He’s been on various podcasts, like Joe Rogan’s and Tim Ferriss’s. He oozes eloquence. Every sentence he speaks is brand new. Every analogy and metaphor a drop of revelation. I’m not sure if prophets are still made today in the post-Information Age, but he’s one for the ages. It’s not that he’ll just drop one-off quotables during the course of an interview. No—every sentence he speaks is something that twists your mind. Wow, you think—I didn’t know you could do that with the English language, with so few words.

How does he do it?

This topic intrigues me because the topic of prophets as a whole is fascinating. How do normal men in the course of history become so superimposed on the human timeline as to be mistaken for beings of extra-terrestrial origin? There are some religious texts—likely all of them—that are pure literary gold. What enables these authors to compose beyond the creative threshold of their time?

What enables Naval to speak more eloquently than others?

Here’s what I think: I think he makes it up as he goes. I think he has no idea what he’s about to say until he says it. Most of what he says is spontaneous and likely not even something he’s heard himself say before. He’s just as surprised and impressed with himself when he speaks as you and I are.

I think it’s the medium that unlocks something special in him. I don’t think Naval could write an essay, for example, as profoundly as he can give an interview. I don’t think he can sing or write a song as profoundly as he speaks. I don’t think he can give as profound a TED talk as he can a profound open-ended interview. I think the medium unlocks something special in him that he himself did not know existed in such packaged and consistent form until such interviews began to occur.

I have a friend who, on the phone and during the course of normal spontaneous conversation, will speak such profound utterances in such simple ways that I tell him: you simply must record yourself speaking, or publish your works, or something! If the world heard what you’re saying, they’d melt for more. The funny thing is, whenever he goes to transcribe this profundity to other platforms, it falls apart. He doesn’t come off as smooth. It doesn’t sound the same when written out, or sung out, or podcasted out. Nope. It only works if it’s on a phone call, and it’s spontaneous, and non-recorded. This is the random mutation that my friend possesses, and it’s non-transferable, and non-cross-platform.

I think yet others have other random mutations that allow them to thrive in certain creative environments beyond the threshold. Great singers or songwriters can express themselves more passionately in a song than in an essay or interview. Great writers can express themselves more lucidly in a novel or poem than in a speech. Great artists can provoke thought in a painting or sculpture more than they can in a conversation. Great speakers and politicians deliver more impactful orations in a monologue than via song. Great playwrights and movie directors show a more vivid tale with the lights on than off.

What then is the source of greatness in the works of singers, writers, speakers, and artists? How does an artist paint something exquisite, or a singer compose something beautiful, or a writer write something profound? They simply begin painting, composing, writing, or singing, and their random tint does the rest (and of course years and years of compounding wisdom and experience).

So, how does Naval speak so eloquently? He just begins speaking.

Rarity is extremely uncommon

With all the perpetual hype around cryptocurrencies and recent hype around non-fungible tokens, it can be easy to forget just how uncommon rarity is. Try this exercise:

Look around you, or outside your window, and point to any object and ask, “is this rare?” The answer will almost certainly be “No.”

The tree outside my house — not rare.
The bushes by the trees — unique, but not rare.
The brick my house is made of — not rare.
The gravel on the road — not rare.
The ceramic my coffee cup is made of — not rare.
The lightbulbs embedded into my ceiling — not rare.
The chair I’m sitting on — not rare.

In fact, I’d challenge you to find a single rare item in or outside your home. Chances are if you do find such an item (you probably won’t), it will likely be gold, jewelry, or some explicitly collectible item.

Rarity is extremely, extremely uncommon on Earth. Everything is so easy to replicate and reproduce.

So it shouldn’t be too difficult to rationalize that when something rare is found, and it is certifiably rare, the human instinct is to harbor it. Imagine walking the mountain path for miles and seeing all the same trees, animals, bushes, leaves, dirt, branches, rocks—but then suddenly your eyes alight on this shimmering yellow rock type element that you’d never seen before. Would you not stop to pick it up and lust over its exquisite uniqueness? You’d carefully stash it in your hide satchel and take it back to your tribe. Perhaps foolishly you'd hold it up in the air and exclaim, Look what I found! Likewise enchanted, ooo’s and aah’s emanate as a crowd forms around you, everyone reaching up trying to get a piece, or—if they’re lucky—a closer look. Eventually someone with more foxskin than you decides they simply must have that item, if for no other reason than because everyone else is likewise swooning over it, and makes you an offer you can’t refuse. Thus is born the value of gold. Gold is truly rare. And gold is extremely uncommon.

By now we’ve hopefully established that rarity is extremely uncommon. We've simply defined a word. Here comes the boss level:

In the physical world it is somewhat labor-intensive to reproduce artifacts. Yet even given the relative difficulty of reproducing physical items, rarity remains extremely uncommon. In the digital world, it is virtually costless to reproduce artifacts. So if in the physical world there is no abundance of rarity, the digital world is nothing if not infinite copies perpetually propagated through the ether. In the digital world, where artifacts are composed of commoditized bits and bytes, rarity is by definition impossible.

So how shall we react when we are told the news that rarity is now possible in the digital realm, by making the reproduction of digital artifacts so expensive that it is, by the laws of physics, nearly if not totally impossible? If I produce a 1-of-1 digital artifact and I say—nay, prove—that it would require more energy than is available to most nation states to recreate this digital artifact, should this exquisitely rare item not have value? The answer is simply, non-controversially, yes. The more difficult question is: what should be its value? The answer to that is still somewhat simple: the most someone is willing to pay for it.
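The “expensive to reproduce” property described above is what proof-of-work mechanisms provide. A toy sketch of the idea: finding a nonce whose hash meets a target is costly, while verifying it is nearly free. (The `mine`/`verify` names and the difficulty level are illustrative; Bitcoin’s real scheme hashes block headers with double SHA-256 at difficulties astronomically higher than this.)

```python
import hashlib

def mine(data: str, difficulty: int) -> int:
    """Brute-force a nonce whose SHA-256 digest starts with
    `difficulty` leading zero hex digits. Expensive to do."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(data: str, nonce: int, difficulty: int) -> bool:
    """Checking the work takes a single hash. Cheap to verify."""
    digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("my 1-of-1 digital artifact", difficulty=4)
print(verify("my 1-of-1 digital artifact", nonce, 4))  # True
```

Each extra digit of difficulty multiplies the expected mining cost by 16 while verification stays constant—that asymmetry is the whole trick.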

Speaking less abstractly, a digital art piece should be valued the same way a physical art piece is valued. There is no need to struggle with “I can’t believe someone paid millions of dollars for a JPEG!”—NFTs do not invent the art market. They simply translate it to the digital realm. If ever you find yourself struggling with the valuation of NFTs, simply translate your qualms to the physical realm (“I can’t believe someone paid millions of dollars for oil on a canvas!”) and quickly end your befuddlement.

Bitcoin of course is the aboriginal rare digital item. Should Bitcoin have value? Well, let’s hash this one out real quick:

Is Bitcoin rare? Yes.
Is it virtually impossible to reproduce a Bitcoin? Yes.
Do a lot of people want Bitcoins? Yes.
Do a lot of people agree Bitcoins should have value? Yes.

So, it shouldn’t be hard to imagine why someone would want to pay tens of thousands of dollars for—yes—bits and bytes. Why remain so befuddled by this concept? Rarity is platform agnostic. It does not matter if it occurs in the physical realm, the mental realm (ideas, poetry, literature), or the digital realm. Rarity is extremely uncommon, and thus extremely valuable.

The Kids Choose

If you haven’t been following lately, there’s a newly relevant form of digital scarcity called NFTs, which are selling for thousands of dollars, sometimes even hundreds of thousands of dollars. NFTs are rare collectibles, whether they be digital artworks, music, memes, or domains. In most cases NFTs are just a smart contract application built on top of Ethereum, where each collectible series is its own smart contract. Hashmasks are one example. There are a total of 16,384 unique digital items, and each item is represented as a token on the Ethereum blockchain.
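At its core, the kind of smart contract involved here is little more than a ledger mapping token IDs to owner addresses. A minimal Python sketch of that ledger follows—this is an illustration of the ERC-721 idea, not how Hashmasks is actually implemented (real contracts are written in Solidity and enforced on-chain; all names and addresses here are made up):

```python
class NFTCollection:
    """Toy ownership ledger in the spirit of an ERC-721 contract."""

    def __init__(self, name: str, max_supply: int):
        self.name = name
        self.max_supply = max_supply  # e.g. 16,384 for Hashmasks
        self.owner_of = {}            # token_id -> owner address

    def mint(self, token_id: int, to: str) -> None:
        # Scarcity comes from these two checks: a fixed supply,
        # and each token existing exactly once.
        if token_id in self.owner_of:
            raise ValueError("token already minted")
        if not 0 <= token_id < self.max_supply:
            raise ValueError("token id out of range")
        self.owner_of[token_id] = to

    def transfer(self, token_id: int, sender: str, to: str) -> None:
        # Only the current owner may transfer.
        if self.owner_of.get(token_id) != sender:
            raise PermissionError("sender does not own this token")
        self.owner_of[token_id] = to

masks = NFTCollection("Hashmasks", 16_384)
masks.mint(42, "0xAlice")
masks.transfer(42, "0xAlice", "0xBob")
print(masks.owner_of[42])  # 0xBob
```

Anyone can copy the image a token points to, but only one address can ever appear in the ledger entry for a given token—that entry, not the pixels, is the rare thing.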

Onlookers are incredulous at the concept: how can infinitely duplicatable RGB pixels on a screen have value? Why would I pay thousands of dollars for a digital artwork that anyone can take a screenshot of and stare at locally for hours on end? This concept will never work, they say. NFTs are totally ridiculous, and should not have any value.

Yet it’s remarkable how soon we forget the arbitrary nature of…just about everything on this damned planet. Do you think it obvious that gold should have value, or physical paintings, or dollar bills, or Pokemon cards? These narratives were at one point invented. And some group of zealots were tasked with convincing everyone else that this should be worth something. When enough people were in agreement that these things had value, then at that point, those things became valuable, and remained perpetually valuable so long as the narrative retained enough believers.

The question is not “do NFTs have value?”, but rather, can enough people be in agreement about the value of NFTs as to create a liquid market? The answer of course is yes. Humans are nothing if not malleable in their beliefs. If you thought reality was grounded in any sort of objective nature, go talk to the billions of people that believe bearded men in the past could walk on water, or part the seas with a cane, or split the moon in half and put it back together. To these people, these beliefs are far more obvious than any scientific fact you can lob at them.

So who chooses what has value and what doesn’t? The kids do. Today a bunch of gray-hairs in suits dictate what should and shouldn't have value. Yet if you can’t convince a generation of kids to buy gold, and they instead want to use their hard-earned money to buy digital art, where do you think the value goes over time when the kids inherit the globe? If you can’t convince the kids that the dollar is a safe store of value, and they instead want to use their hard-earned money to buy digital currency, where do you think the value goes over time?

Value is invented. It is decided. It’s not obvious. And it’s not exclusive. The common attribute amongst all stores of value is their relative rarity. It can be a piece of metal, it can be a piece of paper, or it can be a dildo. Ultimately, it’s the story behind the pixels/material/paper that contains the value, and not their physical characteristics. Not all digital art should have value. But if [some exceedingly famous person] minted a unique art piece and etched it on an Ethereum block, and sold that story to the public, do you not think this should have value? In fact, getting an exceedingly famous person’s autograph on a piece of toilet paper would instantly make the toilet paper valuable. How much would you pay for Da Vinci’s poop stain? You, maybe not much. But I guarantee you there is a market of Da Vinci aficionados that would pay millions of dollars for it, and its value would only go up over time. It’s the story.

I’ll even take this regrettable analogy one step further: if it was determined that Da Vinci’s poop occurred on the day he finished the Mona Lisa, the Poop Paper’s value would instantly rise from ten million dollars to a hundred million dollars.

It’s the story.

The Bitcoin Story

According to Hacker News, Bitcoin has many problems, and therefore, is not merit-worthy:

  • “Transactions are slow and expensive”
  • “It lacks a lot of the controls that traditional banks have for good reasons, so fraud becomes harder to tackle”
  • “I just wanted internet money, not a speculative financial instrument.”
  • “This volatility is why it will never be a useful currency.”

By this same logic, email should also not have succeeded:

  • Email is slow, heavy, and uses largely outdated technologies
  • It’s nearly impossible to make email private/encrypted
  • Email, as software, is largely impossible to make incremental improvements on due to its sheer decentralization

But email’s days are numbered, yeah? Any day now.

Perhaps by this same logic, the English language shouldn’t succeed either:

  • There/their/they’re is a UX nightmare that will inhibit adoption
  • The gh in laugh makes an f sound—give me a break. But sometimes it also makes an oo sound, as in through. But also it can make an oh sound, as in though. Good luck scaling that.
  • English is riddled with homographs and contronyms that will confuse even English professors

Will the startup that disrupts the English language come from California, or will it be Texas?

If the pattern has not yet been made obvious, networked technologies like Bitcoin, email, and the English language are not valued by their feature set and design—they’re valued by the number of nodes that speak that same language. The most useful language is the one spoken by the most number of people. The most useful communication technology is the one accessible by the most number of people. The most important currency is the one believed in by the most number of people.

The philosophical arguments against Bitcoin end up being precisely why it is so valuable:

  • “Bitcoin isn’t even the best cryptocurrency. It was just first.” Yup, exactly. But like, a Big Bang of a first, am I right? This is important.
  • “Bitcoin is just a pyramid scheme that requires new believers to make previous believers’ holdings more practical. It has no intrinsic value.” Yup, it’s a networked technology. The more people you can get to speak your English-disrupting language, the more valuable it becomes. Without belief, without adoption, there is no value or utility.
  • “Bitcoin is just a cyberpunk fantasy about a future where Bitcoin will matter.” Yeah, but it’s a hell of a story, right? If you think this story is compelling, check out Christianity’s stories, or Islam’s stories, or the United States national story. Now those have suckered in quite a few. What would you put Christianity’s market cap at?

If you continue judging stories like Bitcoin by their technical merits, you will perpetually blind yourself to their importance, value, and potential. When you instead judge networked technologies by their narrative, ubiquity, trust, and ultimately, decentralization, you might begin to understand what a $1 trillion story looks like.

On the Epic side of history

Imagine a natural road spontaneously forms between point A and point B, and that as a consequence of this road, individuals suddenly wake up to the importance of point B, and of traveling there. Companies had first ignored point B altogether, but because the overwhelming majority of individuals now travel this road, these companies must now begin meeting individuals where they are: at point B. If they don’t, they will perish.

But then comes along a wonderful invention: a road between point A and point B, but built on Conveyor Belt technology by the iCompany. Anyone traveling on the iRoad will arrive at B in 1/10th the time of the natural road. At first, the toll for individuals is far too pricey, so they disregard the iRoad; individuals are ok with the time cost of taking the natural road. But as time passes, more and more people begin taking the iRoad due to its undeniable benefits. Time turned out to be just one factor. Journeyers on the iRoad experience benefits like reduced health risk, less wear and tear, and an all around more comfortable experience. At some point, not taking the iRoad becomes of great consequence to individuals. Taking the natural road is no longer an option.

The benevolent iCompany has done a great service for humankind building this road that has completely changed the way people get to point B. And because the iCompany knows that other companies would love to travel this road just the same to cater to all its journey-goers, it charges them a hefty toll for access. It says, “anything you sell to people on this road, we will take a meaningful percent of, in perpetuity, forever and ever.”

Sellers on the iRoad grimace once at the terms of the deal, but sign nonetheless, knowing that not being able to sell to journeyers on the iRoad means their business will cease to exist. They can go and sell to riders of the natural road, but it isn’t enough to guarantee a meaningful existence.

Over the days, months, and years, sellers go through a whirlwind of survival challenges, all to put on a smiling face for their customers on the iRoad, whom they can only meet through frosted glass. Throughout all the changes, mutations, and evolutions, one thing remains beautifully constant: the iRoad commission. It is the axiom of existence on the iRoad.

Merchants on the iRoad have for years felt the commission to be too high, and an impediment to their survival. But what can they do? Fight the iRoad, and risk being barred. Avoid the iRoad altogether, and immediately perish. Build your own iRoad, and fail.

Building an iRoad is of course no easy feat. In fact, only two companies in the history of the world have succeeded in doing so. The other such road, the gRoad, exists parallel to the iRoad, and funnily enough, charges the exact same toll.

It’s almost as if these two roads have a monopoly over access to passengers traveling to point B. We can say this because:

  • Individuals can ignore point B at their own peril
  • Companies can ignore point B at their own peril
  • The only way to get to point B is via one of two roads
  • Building your own road is historically impossible and impractical
  • Both roads charge the same commission and are unwilling to negotiate
  • This commission is often seen as egregiously excessive

In non-monopolistic cases, there would be many, many more roads to point B. And because individuals can choose which roads to travel, these roads compete to a point where commissions and tolls are reduced to their lowest natural level.

In cases of monopolies, there is no competition. And thus no real reason to lower prices, especially for a good as important as access to point B.

There are two common arguments one sees over this epic battle:

  1. The Textbook Libertarian: "If you’re not happy with the fee don’t use the road." As mentioned, one cannot simply ignore this road. This response is equivalent to "Don’t exist", but I think things that exist want to stay in existence. So this is ultimately too nihilistic a response.
  2. The Textbook Retailer: "All roads charge tolls." Sure, almost all roads will levy a toll. The difference is that traveling most roads is optional. Point B however is special. Very, very special. So special that if you ignore it you will perish. And there are only two roads you can travel to get to point B. These two roads appear to act in unison to maintain what sellers deem unreasonably high tolls.

It’s extremely important to understand what differentiates this case from any other case where you can successfully apply The Textbook Libertarian and The Textbook Retailer:

Monopoly.

The constricting of competition.

The complete suffocation of choice.

A total hoax

A friend of mine, whose intellectual opinion I admire, recently told me that he believes the coronavirus is a hoax. Completely fictional. Doesn’t even exist. I said, lolwut? That this virus could be completely fabricated had never remotely crossed my mind to be in the realm of possibility. But, this friend of mine had been right about other complex topics in the past. So I lent him my ear.

The idea is that the virus, and the subsequent lockdown, is cementing power into the hands of a few organizations and screwing over poor people and small businesses (which, objectively, I suppose it is). And indeed, you find that with most conspiracy theories, this is also the case: the masses get screwed, and the powerful consolidate ever more power.

The inspiration for my friend’s ideas was a 3-hour interview on London Real with David Icke. I won’t link to it here, but I’m sure you can find it. David Icke is essentially the Alex Jones of the UK, whatever that happens to mean. But, because this message came as a personal recommendation from a friend, I promised I wasn’t going to judge a message by its messenger. Unique perspectives, historically, tend to come from outsiders and outcasts. So I suspended any judgement, and watched the video with a completely open mind. I’m not insecure about my ability to discern, so if I watched the video and I was convinced, then so be it, and if not, then I’d stand to come out stronger.

~~

My friend and I argue endlessly about the nature of conspiracy theories. He says, given any theory, you have to investigate the facts and come to a conclusion for yourself. Certainly hard to argue against. And I say, conspiracy theories are more a mindset, than about the particular details of an incident. I shout over him abstract structure and form, he shouts over me certain events and their peculiar nature.

Conspiracy theories are absolutely delicious, by the way. They make sense of the senseless, and connect disparate pieces of information in such mesmerizing fashion that you think the mesmerization can only be attributed to truth. In my experience, the truth tends to be ugly and incomplete, rather than perfect and whole. (Think religious narratives, and how uniquely complete and comforting they are, versus the rather grotesque nature of scientific narratives.) Above all, conspiracy theories reject chaos, and imply cause and intention behind the wildest of human events.

So how to explain the perfect nature of these theories and their undeniable deftness at compiling facts and presenting them in a timeline of pure symphony and perfection? Here’s my conspiracy theory on conspiracy theories:

Chaotic things happen in the universe, and in our world. The powerful are more equipped to take advantage of these events when they occur. For example, in the case of a contagious virus that is chaotic, governments can use this chaos to their advantage to overreact, if deemed beneficial. I think conspiracy theories, as a rule, tend not to necessarily modify event chronology (apart from the few that completely deny the total occurrence of an event), but to instead attribute intention and non-chaos as the aboriginal source of an event. Whereas chaotic events have a natural cause and a never-ending emanation of effect, conspiracy theories, or what defines them, tend to take an event that has had significant consequences, and retrofit causes, intentions, and strategies to ultimately imply a non-chaotic cause. Ultimately, “someone is in control,” rather than “it’s a wild, chaotic universe."

I think it would be more in the realm of possible logistics, based on what I understand about the chaotic nature of the universe, that the powerful are simply better equipped to take advantage of chaotic events that tend to leave the less powerful helpless. And these chaotic events tend to cement power into the hands of the few.

Assuming an actual deadly virus that, say, literally makes you throw up blood and kills you within 10 seconds of contraction, the powerful and rich will always, one way or another, be more insulated from something like this than the poor. And so events like these tend to make the rich richer, the powerful more powerful, and the poor poorer.

The classic example is 9/11. Conspiracy theorists would say, the attack allowed the government to expand its powers (Patriot Act, Iraq War), therefore, the attack was intentional, and designed to do just that.

Whereas non-conspiracy theorists would say, the attack was chaotic, but in that chaos, it allowed the government to expand its powers and to take exceptional measures.

In some or many cases, the government can simulate chaos to catalyze opportunity. Conspiracy theorists, as a rule, cannot differentiate between what is chaos and what is simulated, and err on the side of complete simulation.

~~

I watched the whole three hour video, by the way. The first half was relatively coherent. And I’m not going to lie: hearing an eloquent person say that this whole ordeal was completely fabricated made me feel really good. It was comforting. It was freeing. It made me feel like I knew something others didn’t. That I now had an advantage. But I also know that truth—natural truth—is rather grotesque, uncomfortable, chaotic, murderous, and random.

He spent the second half of the video tying human breeding with AI, cloud computing, Bill Gates, 5G, vaccines infested with self-replicating nanobots, fortune-tellers and psychics, demons, sacrificing the blood of children to the devil—he connected all these impossibly disparate pieces into one complete narrative that ultimately said: someone is responsible for making your life as shitty as it is. It’s not your fault, it’s not the universe’s fault: it’s the fault of a secret cult with Bill Gates, DARPA, Zuckerberg, and even Elon Musk at its masthead.

Poor Jack Dorsey got left out of the meetings.

Bullshit opinions

If a friend describes to you some weird random physical pain they’re experiencing, probably the best thing you can say to them is, “you’ll be fine.” It’ll pass. In most cases this ends up being true.

But imagine making a “spiritual” symptom checker website where the result for every input is “you’ll be fine” (rather than the present “you have cancer” minefield). You’d get harassed and bullied mercilessly for reckless endangerment.

The difference between the friend and the internet is that on the internet, everyone thinks you’re talking to them.

I’m not.

I think a valid response to disagreement on the internet is, “I’m not talking to you.”

If I say it’s nice and sunny today, and you say no, actually, it’s cold and windy where I am, it’s simply the case that I wasn’t talking to you, but talking to people who may agree with, or are able to empathize with, my perspective. Or perhaps share the same circumstances.

On Twitter, people attempt to speak to their finite followers. Not the infinite, never ending, ever-disagreeing masses. Tweets are forcefully ejected beyond their target audience through retweets, which is like having something you say to a small group of friends amplified to your entire town. Surely almost never what you want.

If someone says something on the internet, and you disagree with it, while even just one other person finds it agreeable, you have no more business interrupting that conversation than you do interrupting two people trading arbitrarily ridiculous opinions in a cafe.

I say ridiculous shit to my friends all the time that I wouldn’t dare say on the internet. Not because I’m afraid to say those things, but because, I’m not talking to you.

To the whole wide world, I really don’t have much to say. Which is probably why I struggle to tweet. Who even are you, shape-shifting person reading my non-existent tweets?

I think Twitter, blogs, and social media, compared to say PhD dissertations, are fine places to post ridiculous opinions for which you truly have conviction.

If I tell a friend who complains of a tummy ache, "you’ll be fine," I’m a good friend. But in a tweet, I’d be a horrible person. If I tell a friend, “perhaps this lockdown needs to end and is causing more harm than good,” the friend either agrees or counters cordially. On the internet, you’re a horrible person. I suppose in this particular case, this horrible opinion of mine, spoken privately, goes to corrupt only one other individual, whereas on Twitter, I’m “corrupting” 34 million individuals.

I argue that someone who gains millions of followers on a play social media website is not suddenly responsible for changing the nature of their discourse. Certainly, for your own peace of mind, you should tweet with caution if you wield such influence. But there is no moral obligation for someone who did nothing but create a social media profile and gained a few million voluntary followers to suddenly align their opinions with those of health experts and the scientific community.

This case may be difficult to make with someone like Musk, but imagine an 11-year-old who gains fifty million followers and begins expressing what can only naturally be bullshit opinions. Ought this child complete a university degree before expressing any sentiment on current events? Or ought you to simply understand the context that an 11-year-old is saying something ridiculous not worthy of taking too seriously?

If you want accreditation, if you want peer review, if you want vetted opinions, this is not the domain of Twitter, nor Facebook, nor any other casual social media network. Perhaps a scientific journal has what you’re looking for?

If you want bullshit conversation, welcome to Twitter.

Welcome to the internet.

Slogan? Try not to get so upset about what you see.

Simulation overflow

Quantum mechanics is the proof that we’re in a simulation. That there is a dimension beyond our own, in which our own physical rules and laws do not operate. Entangled particles bypass the light speed limitation because their state is reconciled externally. We only see the resulting particle flips—not the computation, like which other particles to affect in the global counter.

If a hundred-trillion-light-year-wide simulation existed on a hard drive, the simulated particles would be very far apart, but only inches apart on the physical drive. Far when simulated, flat when stored.

Why would a thing want to run a simulation? I believe for its own intellectual amusement. Think passionate science experiment. Or obsessed botanist.

If a thing could run one simulation, it’s likely it could run many simulations. And if it could run many simulations, it probably is.

If you’re a thing and you’re running a simulation, aiming for self-contained autonomy would be most intriguing, particularly so that you could observe many simulations at once, and monitor their behavior as labeled jars on a shelf. “This one has X, this one doesn’t.”

Does the simulation branch off at every point of binary potential? I don’t think so. A thing could likely run many simulations, but not infinite simulations. So it must optimize where and when simulations are forked. I believe this could be somewhat subjective. I also don’t believe a thing would want to inject hastened state or custom events into a simulation past its initial starting point, but would instead prefer to fork a simulation based on an influential event. A thing would definitely want to fork simulations at the incipience of Hitler, for example, to see alternative outcomes. A thing would fork at other events of similar magnitude, like 9/11, or Donald Trump. Or perhaps it forks at a point where one split would result in a speed of light of x, and the other of y.

Can simulations access other simulations? I wouldn’t think so. It would be impossible for a thing to keep simulations self-contained and uncontaminated if it creates a bridge between them. Although, perhaps some simulations have a bridge precisely for this reason: to measure its consequence.

If a thing can run many simulations, couldn’t there be many things running many simulations? I think so. Could we ever know for sure? If and only if this is something the thing is testing for.

Or perhaps a bug. An unintentional bridge. State reconciliation errors that leak information. Maybe the thing is sloppy.

I find it comically suspicious that we are unique in existing, on a stranded rock, in an otherwise infinitely empty universe. This fact alone seems very, very simulation-like. Were it not for this fact, I would honestly think it harder to have arrived at this conclusion.

Just as well, three crazy, infinitely improbable events all chanced to occur in an embarrassingly barren universe: one, the universe came to be. Two, simple organisms came to be. And three, creative consciousness came to be. These occurrences seem to have required careful—or perhaps luxurious—forking. There could certainly be other jars where these events did not happen. And perhaps there too are jars where more than the earth alone was inseminated. Nonetheless, the isolation is extremely simulation-like.

How similar are we to the thing? I think pretty close, in essence, or on our way. It would be most amusing to a thing if it could replicate its own essence through another medium, the same way replicating our own essence is intriguing to us. It has the potential to be a recursive feat. Is the thing in its own simulation? Likely. It wouldn’t know. And in that case, the deeper you are in the cycle, the further you are from base "truth". What does base look like? We’re not allowed to wonder.

If it’s recursive, why would things at every level act the same, have the same desires, and continue creating simulations? Perhaps it may be that we’re simply in the tree that resulted in an obsessive need to replicate consciousness, or the appearance of it. There could certainly be other trees that have stagnated. In that case, a simulation that continues recursing seems to be more impressive than one that doesn’t.

If we do end up creating a simulation that we deem fully autonomous and infinitely intriguing—perhaps, more intriguing than our own—that could also serve as sufficient proof we are in a recursive cycle.

Is there any use in believing we are in a simulation? Probably not. Unless it helps you conjure new theories. Or helps you imagine a new video game, movie, or novel. It may even compel you to write a meandering blog post masking science fiction as theory, shamelessly bordering on complete and total scientific blasphemy.

A year of pain, and some growth

2019 has been a strange year. In April, I underwent a retrospectively unnecessary surgery that caused me to suffer a level of physical and emotional pain, lasting more than six months, greater than any I had ever experienced before. I went from being unrelentingly focused and productive, to not being able to summon the will to write a single line of code. I don’t want to give this excruciating experience any credit for where I have ended up today, so I will treat the resulting occurrences as purely incidental:

Productivity, coding, and burnout

  • For almost a three-month period, Standard Notes sat completely still, in terms of feature development and, to some extent, bug fixes. This turned out to be not such a bad thing. It taught me, above all, that things can wait. Surprisingly, during this long productivity drought, the company did not erupt in flames. Everything continued to function. New users continued to sign up, use the app, and pay for it. Others still sent in praise for what they liked, and condemnation for what they didn’t like.

    It also disarmed bug reports. I don’t panic anymore when someone expresses dissatisfaction with a feature or dis-feature. I don’t panic to build new features or iterate on new versions. I’m not in a constant frenzy. I also don’t work nights and weekends anymore. This is actually unusual for me, since nights and weekends were to me, previously, the only time I’d ever work on side-projects. In fact, in my first career position as a software developer earlier in the decade, having finally exhausted the course of my small-time indie projects that were to make me rich, I was shocked to find out that the company I was to work for had closed offices on Saturday and Sunday! I thought, what lousy dedication! I never not worked weekends, prior to that. If I wasn’t working, I felt like I was failing. This turned out to be a tough mentality to shed.

  • After I had sort of recovered emotionally, and to some extent physically, the two-and-a-half year period of relatively unrelenting focus and furious productivity necessary to build the product finally caught up to me. I was burnt out. Usually when I burn out, I recover quickly. Maybe two weeks to a month, tops. But here days, weeks, and months passed, and I still could not summon the will to code or iterate. I did what was absolutely necessary but no more. I still loved Standard Notes dearly, and wanted to continue making it the best it could be. But if not me coding, then who? Ah! I must explore this thing they call hiring. And so finally, after many years of trying to do everything myself, I realized, I could not anymore. Me coding had become quite bad for business. If I’m coding, I’m not talking to users. I’m not thinking about business models or growth. If I’m coding, I’m not doing anything else. And coding can be an emotionally exhausting experience—you don’t want to walk away, or can’t be bothered, until you solve the problem at hand. It creates an introverted monster out of me. So I don’t code anymore. As much as possible. Standard Notes is now a ~6 person team, with a mixture of full time and part time people from around the world.

Hiring, culture, and remote-first

  • As far as hiring goes, it turns out you must actually make a decision on what kind of company you want to build: local, or distributed. It was mostly a blind process at first. I searched in Chicago for developers, because hey, that’s where I am. But it didn’t quite feel right. Do I really want to build a physical office culture, where I have to see people every day, and be an example of office excellence and dedication for them? Where I have to judge people by what time they come in and leave? Where I have to worry about how each member’s physical presence affects the others’? Where I have to fret over which snacks to buy, and whether or not we have a ping-pong table, and what constitutes excessive ping-ponging? Nah. That all sounds dead boring to me. I honestly would rather not have to babysit anyone’s physical presence. And as a self-proclaimed introvert, I’d probably do a lousy job at being there for people, physically. But in email and chat? Easy. Been doing that my whole life. And, it turns out, so have most of the people you’ll look to hire. So it works out. Local companies, all in all, sound like a huge hassle.

    What’s more, hiring locally is a huge constraint on access to talented people. Imagine you were browsing a website where you see a world map and tell the query box: “Give me the most talented software developers you can find—from anywhere.” And boom—the map erupts with red bubbles indicating the overwhelming amount of people that satisfy your criteria. But then you tell the website: actually, instead of searching the whole damn world, let’s limit this to a tiny 3 mile radius of people. At this point the website should, rightfully, ask you: mate, are you sure? What are you expecting to find with this query? But it obliges with your strange command, and filters the hundreds of thousands of results around the world, to like 5, in your local island-like radius. So yeah, local-first is quite strange.

    I have seen that “founders” (a word which SV/SF culture has tainted, quite honestly, but to which I cannot find a better alternative) who prefer local-first tend to be more interested in the idea of what a company should be, rather than optimizing for results and productivity. That is, they tend to romanticize the idea of building a team, and having everyone forcefully show up at some physical coordinate, whereupon they are all chained to a computer or white board for eight or nine hours. They romanticize the idea of having a ping-pong table or snacks, because they’ve seen that’s what a lot of rich companies do. They fancy themselves CEOs, founders, entrepreneurs—and that this typically involves being as ostentatious as possible. Whereas, if your primary focus is building great software, it doesn’t really matter how or where it’s done.

  • As to how to find people to hire, this at first brought great pain and befuddlement upon me. I thought I had to start networking, god forbid. The first revelation here was, duh, a job posting. So I tried the various remote job posting sites. This was overwhelming, as I got hundreds of emails, but hadn’t the slightest clue how to filter incoming candidates. I would exclude backend developer candidates based on the UI of their resume, or if they sent it as a Word document instead of a PDF. Fast forward a few months to where I have filled all the positions I was looking to hire for, and it turns out: I’ve hired 0 people that came from job postings. Instead, all the people I hired came from the SN community, prior Twitter interactions, or prior work interactions. More recently, I created a jobs page on our website, and I’ve been getting great leads from there. Really, really great leads. Not as abundant in quantity, obviously, but very high in quality. And laser-targeted candidates of course, given they’ve had enough interest to happen upon our homepage in the first place.

Habits, lifestyle, and tweeting

  • While it’s a topic that’s always a bit difficult to talk about, I can feel some slight comfort being a little more honest here given that the state I am living in is legalizing marijuana on January 1, 2020. While the creative benefits marijuana confers can be at times undeniable, and thus, can have a dependency-forming effect (kind of like shaking an empty bottle to death so that you get every last drop out of it), I’ve formed better habits here in 2019. I’ve gotten to the point where I just don’t enjoy it as much. It’s really good for problem-solving, so it has become more of a tool used when necessary than some sort of fun-box that provides entertainment on demand. It’s really not a toy. It’s a tool.

  • I still have not figured out how to write more, or tweet. On my personal account, I’ve tweeted only a handful of times in 2019. Tweeting remains impossibly awkward for me. I’ve never quite figured out how to be the type of person that has 79k tweets. I look at those people in awe and confusion—how!? On the one hand, people who tweet that much clearly have a level of spontaneity and lack of GAF about what other people think, which I hugely admire. On the other hand, every tweet to me feels like an insistence of yourself and your ideas upon someone else. They’re essentially brain farts, but are treated by their authors and followers as some sort of divine arrangement of letters. A lot of Twitter is reacting (or, overreacting) to current events, which I do too, but—and this is honestly not a humble brag but something I ultimately dislike about myself—I can’t hold on to an opinion too firmly. No opinion lasts with me more than a couple hours before I ping-pong between different sides of the story. I’ll try to have an opinion agreeing or disagreeing with some narrative, but then my mind will be like—have you considered the other side of this? And so on. The result is that I simply do not have any opinions that survive a night’s sleep. There is just way too much information, and it’s impossible to consume all sides of a story. The only solution for me has been to completely sit out current events, lest I end up in some infinitely recursive cycle of digging endlessly deeper till I realize, shit, there’s no right answer here. It’s much more complicated than you could have ever imagined. So yeah, my dream of being a “100k tweets” person lives to die another day.

Books, games, and arbitrary lists

Those were some words. Good.

It was hard to write about any of this stuff as it was happening, because it was all sort of brewing. But a year end review is a nice writing prompt. As far as progress goes, there’s really no more short-term low-hanging fruit. Everything I’m embarking on now requires the patience of watching a tree grow. 2019 was a tiny branch that today I saw protruding, and thought, hey, there’s something.

The imagined world

An idea is a story. A story about how the world could be. Great ideas are often described as having an almost ethereal source. Beyond the mind—as if the mind were a receiver, and not a generator. Some people think, I’m not an ideas person. They just don’t come to me.

But, and apparently like every other damned thing in this world, ideas appear to be nothing more than stories. They fictionalize the present, and imagine what an alternative could look like. You don’t have an idea for an app, or a website, or a service—you imagine a world in which that service existed. You create a story about how the world would look with your invention. You imagine the fame and glory it will bring you. Your consciousness submerges in a flash flood of thought and creativity, and you emerge after it all with a wild look about your face. A wild idea has appeared, from whence unknown! But really, you just told yourself a good story.

Nations, religions, and cultures are stories of the collective human mind, a la Sapiens. But I think so are products, and apps, and websites, and services. They are stories first and foremost, with the physical manifestations appearing soon after.

A year or so ago, Dropbox released a huge redesign of their brand. Their new visual design and story communicated something along the lines of: We are no longer a folder syncing company. We are a collaborative solution that enhances creativity and efficiency amongst teams. They rolled out this messaging across their entire digital presence, including website and social profiles, but, their product remained exactly the same. Quite literally nothing had changed in their actual interface (yet). And I thought, what a con. Who are you fooling? You’re not a creativity-inducing company. You’re a folder.

But I think now I admire what they did. They told a story about who they wanted to be. The problems they wanted to solve. And though they were not that today, they knew it was who they wanted to be tomorrow. First you tell the story. Then you build the story. It’s a technique that has worked wonders for, dare I say, the greatest storyteller of our generation: Elon Musk.

Expectation and reality may not always meet, but the only way to keep advancing and innovating is to keep telling more innovative and creative stories. Reality follows, with some delay.

Like Air

At one point, lack of freedom feels like a lack of air. It's total suffocation. But at another point, freedom becomes like air. It's something you notice only in its absence. I've become wealthy recently—a gazillionaire of time. I wake and sleep as I please, and roam space with no one to appease. Employed me, a few years ago, would fantasize almost erotically about the freedom to do one's own thing and build one's own product and answer to no one but one's own self. But like a suffocating human who at a point wishes for nothing more than air, and would be eternally grateful to receive it, freedom evades appreciation the moment it arrives, were you even to take notice of its presence. What you acquire, like air, like freedom, is used at once as a building block to your next desire, and so on.

Reality is a simulation in that the same story plays out endlessly. It's not you that wants freedom, it's a certain few chemicals in your mind. It's not you that wants to scale your company 10x or take on bigger challenges, it's a tempest of chemicals in your mind.

It's not that the outside world is necessarily a simulation. It's that your desires are being simulated.

Desires typically one-up themselves, so that reaching your next goal requires broader thinking and deeper strategy. Playing the desire game is what we call growth. And I think it may be beyond culture, rooted in biology itself. Inescapable.

If our desires are simulated, then does it really matter whether you choose X or Y, or neither? Let's say X will lead to less growth but a more peaceful life, and Y will result in a catapult towards scale but more responsibility. Does it really matter which you choose, if the desire engine runs on full blast either way?

I used to think Elon Musk was absolutely nuts for taking on such big problems. Don't you want to sleep soundly at night? But probably, most likely, I'm not too sure, he and I sleep just the same.

It would seem that if you're going to suffer either way, you might as well suffer towards your most stimulating ambitions. As with Elon, your "peace of mind" seems really to be a false factor to consider in your plans, and may end up inhibiting the utility and scope of what you create.

What happens when an AI learns to read?

There's something old-fashioned about trying to predict the future. I get a little uneasy when someone says "if it's like this now, imagine what it'll be like 10 years from now!" I feel a sense of robbery happening on the part of the future. A modern person attempting to predict the future conjures fantasies and prophecies as quaint as a first-century prophet's. Although I too can't help but let my mind run with seemingly autonomous calculations that assume a future value given a present value, I find it not respectful enough of the complexity of the human system. And were I any good at this skill anyhow, I'd have made a fortune in the markets.

Predictions of the future are so prevalent as to be quickly forgotten and overrun by their never ending onslaught. By one interpretation, the thousand newspapers that encompass the likes of the New York Times are precisely in the business of interpreting present values and assuming their future state. And it's why I feel a sense of wariness when I encounter statements. I'd prefer articles contain more question marks than periods, as that would surely be the true factual nature of any complex situation. Yet sure-of-themselves statements and predictions feel like one of those shady websites that immediately begin a download the moment the page loads, rather than asking my permission to install new software. It feels dirty.

The most prevalent issue on which we let our mind run unbounded is AI. Can you imagine how smart algorithms will be if they're this smart now? Ah, the human and their unrelenting thirst for exponential growth. Of course, we have no reason to be anything other than optimistic. Just look at how quickly we went from brick-sized satellite phones to edgeless "retina" displays. So sure, one way to interpret this would be that we'll have actual retina implants in twenty years if we continue at this rate.

But what of the respect for limits? For miscalculations? For failure, bankruptcy, and politics? What of the respect for the complexity of biological organisms? I could just as easily imagine a future in which we come to realize that perhaps machines are not as capable of self-learning as we thought. We've been riding under the cool assumption that computers can do things faster than humans can, so if an AI learns to read and understand what it reads, then it can theoretically read all the books ever written in a single second, and boom—there goes the singularity.

But when have we ever been right about predicting the future? What if the human algorithm turns out to be a slow one, with no physical capacity for performance increase? Yes, a computer can do things a trillion times a second. But in that time it calculates nothing more impressive than the location of an item in a database, or the weight of a neural node. A single Google search consumes 0.3Wh of electricity. I saw an Alexa commercial recently where a lady wakes up from her sleep in the middle of the night after hearing a startling sound, and wastes no time in asking her intelligent AI assistant "Alexa, what the fuck time is it?" Nice. Surely, no fewer than a billion calculations must have occurred for Alexa to give this helpless human the time. Less than a second of computation time, sure, but still, at least some 300ms.

So what does this technology at scale really look like? An AI that one day snaps into consciousness and assumes all human knowledge in a fraction of a second? Or more like a cryptocurrency network that must balance computational complexity with convenience and accessibility? If I had to let my mind wander, I'd assume the future plays us all, and takes on some shocking twist of realizing some human-brain-speed-limit for computations of any medium. We'll build an AI so advanced that it can read and understand with unprecedented accuracy, but still take two days—a full 48 hours' worth—of computation time to read a full book, faring no better than a high school student, and alas, postponing the human fetish for looming singularities.

It took Elon Musk billions of dollars and several years of attempting to build car-making robots before admitting that humans are underrated, and adopting an updated stance involving more human collaboration in the process. And yet if you do find yourself in one of those Teslas and happen to turn on Autopilot going 80mph on the highway, the folks at Tesla like to remind you: never take your hands off the wheel.

The Top Shelf Principle

Say you have before you a kitchen cabinet with three shelves. On the top shelf you have your most delicious snacks and delicacies. Chocolate chip cookies, crispy cheetos, and frozen pistachio gelato. On the middle shelf you have snacks that are "not bad", but not the most scrumptious. Maybe some beef jerky, plain pretzels, and a granola bar. On the bottom shelf, you have your survival snacks. You wouldn't eat them unless you were starving. For me that'd be plain almonds.

I've found that when I'm in the mood for a snack, my hand will always reach for top-shelf items. If the cabinet is stocked with soft chocolate chip cookies and spicy potato chips dripping with oil, I'll never reach for the almonds. The end result was that almonds never got eaten. In the presence of top-shelf items, almonds just didn't seem delicious enough. They were boring.

But I found that as soon as all the delicious top-shelf items ran out, and all I was left with were mid-shelf items like plain pretzels, the plain pretzels began floating to the top. They became a top-shelf item, and reaching for them became relatively instinctual.

The top shelf principle is thus:

  1. Options, not just in snacking but in any domain, tend to sort themselves by most satisfying first.

  2. On average, you will choose items sorted higher in the satisfaction queue. And anecdotally, what is most satisfying in the short term is typically not what is healthiest in the long term.

  3. The amount of will-power and discipline required to choose an option increases with its sort order in the satisfaction queue. That is, the first item—the top-shelf item—will require very little will-power to act upon. Items towards the end of the queue, however, that are less satisfying but probably healthier, tend to require large doses of long-term thinking and discipline.

    And most importantly:

  4. Options do not possess an inherent satisfaction value. They are always relative to one another. In the absence of a historically top-shelf item, items lower in the queue will surface to the top and themselves become top-shelf items.

In a queue of cheese puffs, chocolate chip cookies, and plain almonds, almonds sound mundane and unappealing. But in a cruel hierarchy containing expired milk, uncooked rice, and almonds, almonds will quickly sort to the beginning of the queue and become heartbreakingly delicious. And you will not feel ripped off for eating them. You will derive more or less equal satisfaction from them as you would any historical top-shelf item.
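Being a developer, I can't help noticing the principle reduces to a tiny priority queue. Here's a toy sketch in Python; every item name and satisfaction score is invented purely for illustration:

```python
# The top shelf principle as a toy satisfaction queue.
# Names and scores are made up for illustration only.

def satisfaction_queue(options):
    """Rule 1: options sort themselves most-satisfying first."""
    return sorted(options, key=lambda item: item[1], reverse=True)

def willpower_needed(options, name):
    """Rule 3: willpower grows with an item's position in the queue.
    0 means top shelf: effortless, instinctual."""
    queue = [n for n, _ in satisfaction_queue(options)]
    return queue.index(name)

full_pantry = [("cookies", 9), ("pretzels", 5), ("almonds", 2)]
print(willpower_needed(full_pantry, "almonds"))  # 2: bottom shelf, needs discipline

# Rule 4: remove the historical top-shelf item, and almonds float up the queue.
cookieless_pantry = [("pretzels", 5), ("almonds", 2)]
print(willpower_needed(cookieless_pantry, "almonds"))  # 1: suddenly more reachable
```

Rule 4 is the whole trick: satisfaction is relative, so the queue re-sorts itself the moment the top item disappears.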

This principle has been useful for me in snacking, sure, but has served me far greater in its application towards lifestyle addictions. My lifestyle cabinet looked like this:

Top shelf:
working, checking some sort of digital feed, like reddit, or twitter, or instagram, and playing video games

Middle shelf:
reading a book, watching a movie or show

Bottom shelf:
socializing with people in real-time, house chores

Naturally, I was doing a lot of top-shelf actions, but hardly any bottom-shelf actions. And I had developed a fatal misunderstanding towards bottom-shelf items: I had thought I hated socializing in real-time because it was inherently unsatisfying to me. I had qualified myself as an innate introvert with no capacity for change. In reality, it wasn't that I disliked socializing—it was that I enjoyed playing video games more. And with the options of playing video games or checking my phone always available to me, I almost always acted on them first, leaving whatever crumbs of waking capacity (usually none) to items lower in the queue.

I observed this in the children of family members: if you gave them an iPad to play with, they weren't going to say no. And when they do get their hands on it, they lose themselves so deeply in the digital world that they are mostly unavailable in the real one. But take away the iPad, and a remarkable thing happens: they find something else to do. Sure, they might throw a momentary fit, but a kid is a kid, and will not let one second pass without finding some way to entertain themselves. In these cases, where the top-shelf iPad was removed from the equation, items lower in the shelving system, like two blocks of legos, surfaced to the top, and the kids began playing with them with as much voracity as the iPad.

As for me, a grown adult with seemingly no need for personal order or control over time spent facing a digital device, I wanted to reduce working, checking feeds, and playing video games for one reason: RNG.

Developers know RNG as a random number generator. In the video game world, gamers use the acronym simply to mean "randomness" in a game. Random or not, in the course of playing video games, you are bound to lose. Especially in a networked game where you play against other real people. Losing, in a word, sucks. It's a very sharp and gutting pain. The pain lasts only seconds, but stabs like a knife. Losing can be especially painful when it happens in a game you love; one which you've been working hard to better yourself in.

For me, this game was Rocket League. I'd been playing almost every day for a year and a half. When I'm winning, it's pure ecstasy. When I'm losing, it's pain coupled with RAGE, depending on how bad the loss is, or how futile I feel playing. You tell yourself, if I keep playing, I'll get better, and I'll lose less. Of course, that's a lie. You won't ever lose less, because as you get better, you get matched up against people who are also getting better. The result is that you're always playing against similarly skilled people.

The tragedy comes into play thusly: whether you win or lose in a digital, fast-paced game is largely random. The games themselves aren't random, but the interactions you have in the digital world with other people are more or less unpredictable. In a game of Rocket League, two players may fly towards the same ball, at the same time, and a thousand factors will determine which way the ball goes. This interaction is literally called a "50/50" in Rocket League, because it's almost inherently unpredictable. The problem is, if winning a game is very important to you, and victories are decided by these chaotic interactions, then you leave your emotions to chance. In my experience, the emotional aftermath of winning or losing could last a couple of hours. That meant that every day, there was a 50% chance that around 1 PM, I would feel like shit for the next two hours. And guess what—I did. On a losing day, I would be in such a bitter mood that I felt like doing nothing but languishing for the next few hours.
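Those daily coin flips add up in ways intuition underestimates. A quick back-of-the-envelope simulation (the day count and mood hours are hypothetical, just to put a number on how much mood a daily 50/50 controls):

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

DAYS = 365          # a year of daily play sessions
BAD_MOOD_HOURS = 2  # rough emotional aftermath per losing day

losing_days = 0
longest_streak = streak = 0
for _ in range(DAYS):
    lost = random.random() < 0.5  # each day's session is a coin flip
    if lost:
        losing_days += 1
        streak += 1
        longest_streak = max(longest_streak, streak)
    else:
        streak = 0

print(f"losing days: {losing_days} of {DAYS}")
print(f"hours spent in a bad mood: ~{losing_days * BAD_MOOD_HOURS}")
print(f"longest losing streak: {longest_streak} days")
```

Over a year of fair coin flips you'd expect roughly half the days to be losing days, and losing streaks of a week or more are not flukes but near-certainties.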

Same with work: if my emotions depended on how few or how many bug reports I'd find when I opened my email inbox, or how much traffic and sales the previous day had generated, then I was leaving my emotional stability in the hands of chance. Of course, these figures tend to average out over time, but on a day-to-day basis, you never quite knew the shape of what was to come. I used to have work emails and notifications delivered directly to my phone's lock screen, so I was always in the know. In other words, I danced with chance at every turn of the wrist. Sometimes, good news would light up my phone, and with it my face. Other times, decidedly the opposite. The short of it is that I now check notifications, of any kind, only once a day in the morning. Otherwise, my phone is completely devoid of notifications and accounts of any kind.

Lastly: feeds. By feeds I mean digital applications, like Reddit, Instagram, Twitter, and Facebook, whose streams constantly change and surface something new. Feeds became dangerous for two reasons: 1) RNG: you never quite knew what you were going to get, and whether it would upset you or make you happy, and 2) the mere act of refreshing feeds became instinctual. I could be standing in line, or walking from room to room, and reflexively reach for my phone to check some feed, and in the span of five seconds, bounce from app to app pulling-to-refresh, for no apparent reason whatsoever. Pulling to refresh had found its way onto my top shelf.

I had first witnessed the top-shelf principle in action in my very serious ordeal with snacking, and later with kids and the presence or absence of an iPad. So I thought to myself: if I completely ransacked my top shelf and disposed of all the items I'm habitually inclined toward, what would happen? Would I go mad with idleness? Or would I find something else to do?

I unplugged my gaming PC. I disabled all notifications from my phone. I wanted it to be so that every time I checked my phone, there would be no notifications. This way, I wouldn't even have to check. I would just know there wouldn’t be any. In the midst of social or family events, I completely turned my phone off. I didn't want to run to it when I felt bored with conversation. I wanted to push past boredom to see what lay on the other side.

The result has been as anticipated by this grand pseudo-principle. In social situations, not retreating to my phone has led me to find other ways to entertain myself. And it turns out, conversation can be quite entertaining. Who knew? Of course, in the presence of video games, conversation wouldn't be, but stranded with no other options, you find a way. It's a bit like the cliché of the shy person at a party retreating to a corner and checking their phone, to seem busy and avoid socializing. I now know the solution to this problem: shut off your phone entirely, or leave it behind, so that you have to socialize. When you have to, you will. And you'll do it well too, if for no reason other than to thoroughly entertain yourself.

Not having video games to reach for, great blocks of time have opened up in my day. And since sitting and doing nothing is quite literally undefined, I always found something to do. I began reaching for the almonds-equivalent of real life. I began reading more, whether it be a long session falling down the Wikipedia rabbit hole, or 21 Lessons for the 21st Century, and now the very compelling The Gene. (Did you know that in the 1920s, in the United States of America, "colonies" were set up to aggregate "dumb" people and sterilize them so they wouldn't reproduce? Approved by the U.S. Supreme Court and everything. Culling the "weak" was simply a trend amongst nations, including Nazi Germany, amidst new discoveries and interpretations in genetics.) When I grew tired of lying around with a digital device, I put it down, sat up straight, and contemplated my next move. "Well, I can't play video games. I don't have any digital feeds to get lost in. And I'm not going to sit here and do nothing." So I got up and did the dishes and cleaned the kitchen. I tightened a loose doorknob. I did some other repairs around the house.

This is week three of this strange experiment. And I kid you not—finding a chore to be done has been as exciting a prospect as playing a game of Rocket League.

The only problem is, I'm all out of chores.

Evil algorithms

A world in which advertisers know your every interest is scary. But a world where entrepreneurs build products no one ever hears about is even scarier.

A few years ago, I bought a pair of $60 Nike shoes. They were thicker than your average modern Nike shoe, and much taller, reaching just above the ankle. They were great for moving around, playing basketball (when I did that), and general everyday use. As they started to deteriorate, I began looking for the exact same pair to replace them. But no matter where I looked, they could not be found. They seemed to be a much older model, and shoes apparently don’t have specific names, so you can’t really look them up. I searched for about a year, on and off, both in stores and online, but could not find any pair with the same style and attributes.

A few months ago, Instagram, having picked up on my interest in finding my long-lost soulmate of a shoe, sensed it might be able to help. It offered me an advertisement for a pair of shoes remarkably similar to what I was looking for. I ignored the ad the first few times, but it kept following me. I refused to interact with it. My ego would not allow me to purchase a product from an advertisement. Eventually, I relented, and I bought the shoes. And my consumer hungers were thoroughly satiated.

Over the next few weeks, Instagram began showing me more ads for similar products. I wasn’t in the market for any more apparel, but I was intrigued by all the new brands I was discovering that you couldn’t find in stores. It turns out there are countless fashion and design brands with no physical presence whose products exceed the quality found in stores tenfold. And so Instagram learned a little about me, and I learned a little about other companies Instagram thought I might be interested in.

Acquiring these shoes made my life better by the amount you’d expect a pair of shoes to better your life by. But it did satisfy a need, both on my end and on the entrepreneur’s end. A neural connection was made. Demand was satisfied by supply, all through the power of the all-knowing internet. And I could not help but ask myself: is this such a bad thing? That entrepreneurs can make products and reach exactly the kind of people who would be interested in them sounds not so much like a bad thing as progress on one of history’s most difficult, unsolved problems.

Because if you can complete that loop, of entrepreneur to customer, then you can ensure consistent economic activity and prosperity—for you, the entrepreneur, and society at large.

And I thought: wouldn’t it be wild if, instead of advertisements being these evil, demonic, invasive things (though they sometimes are), they were instead a testament to our advancement? A demonstration of the ingenuity of human problem solving. Human society at its best.

Because if every dollar you earned was hidden under your mattress instead of spent, economies would falter. Society could not prosper. And while many—perhaps even the majority—are still excluded from the economic gains that consumerism has conferred, there is no doubt a rise in possibility that was not available before. My first reaction to consumerism is always one of disgust and repulsion. “Companies create demand for products no one really needs through manipulation and association”—how appalling! It must be avoided at all costs! So fine. Then earn your money, and keep it in your bank account. Don’t give a dime to these greedy entrepreneurs.

Who has benefited then? Not you. Not them.

Consumerism seems to be an engine of growth, needless as it may be. It creates reliable, consistent economic activity—the foundation of stable societies. Which is why wherever you find developed countries and cities, you find consumerism.

Perhaps...perhaps we are beginning to make progress on one of history's greatest unsolved problems?

No doubt, there are proper ways to go about this, and improper ways. But the two will be perpetually inseparable. All this to say—mostly to myself: don’t sweep the entirety of "economic algorithms" under the rug. There is good happening as well.

Play the game

When I was just a bit younger, I had dreams of becoming filthy rich. I wanted to do things big. If I were to found a company, I wanted it to be a 500-person company. Hundreds of millions in revenue, headed straight toward an IPO.

As I grew older, I found it saner to focus not on size, but on value. What problem do I want to solve? And how can I best engineer a solution? Numbers and scale became irrelevant. A lot of it was philosophically backed. We are constantly told to be happy with what we have. That “this is it”—if you can’t find contentment with what you have now, you never will.

And so I took that wisdom to heart. Besides, a life of glamor doesn’t seem all that appealing, given we can now live out others' lives vicariously through their social media profiles. Being rich and famous seems like a whole lot of trouble. Simple, humble, and inconspicuous—that seems to be the way to go. But there’s something the Buddhist zen masters won’t tell you:

It’s dead boring.

It’s dead boring to be ambitiously unambitious. It’s dead boring to optimize your life around peace and simplicity.

And I’m starting to think…life was never meant to be lived simply. Unending complexity and scale are the basis of all life, matter, and movement in this universe, and yet we devise stories that say: want for nothing, and you shall attain happiness. Let us quickly say that happiness is nothing. It’s just a word. It describes a state of mind, maybe, but even then, chemicals are fleeting. There is no fixed chemical state of mind. The mind is always brewing up something new.

So then, this idea that wanting less leads to happiness—it’s just an idea. It’s just a story. It’s an experiment. And ultimately, I don’t think it’s founded in any real universal truth. In my experience, it’s been quite the opposite.

I talked in a previous post about the game Factorio, and how I had a flash addiction to it. It is, by all means, the perfect game, and is exactly what I was looking for: something I could get lost in and sink a large number of hours into. A sort of escape. And it would have been just that, were it not for one thing: I wasn’t ambitious enough.

The game is about mastering the engineering of scale, and your output is directly proportional to your ambition. But here’s the thing: if you apply the zen mindset of “I already have everything I need,” then the game is instantly over. There’s literally no more room to keep playing. And that’s exactly what happened:

I stopped playing a game I really loved. Because I saw scale as an evil. I saw the accumulation of wealth, material, and prominence as an evil.

Since then, I’ve downloaded about a game a week to try and find something I can fall in love with the way I fell in love with Factorio. No dice. I can’t get captivated.

So what have I gained, by being zen? Nothing, it seems. Instead, I’ve lost something I really loved. Zen teaches you not to play the game, but what if the game is all there is?

I’m starting to believe that may be the case.

In the past few weeks, I’ve tasted the result of this slimmed-down zen philosophy: support emails and bug reports for Standard Notes are lower than they’ve been in quite some time. This was exactly what I wanted. I wanted to build a product so simple that bug reports would not exist. Support emails would be minimal. And it seems...I’ve done that? Don’t get me wrong—still lots more work to do. But if this was the grand goal, which I thought would take a decade, and I’m already seeing a preview of what it’s like, then my human mind can’t help but think: what’s...next?

My zen mind says: nothing’s next. Enjoy this. My game mind says: move, scale, grow, build, act, collaborate, accumulate, and ultimately: play. Play the game.

I think…I think the zen story is a fiction. I think minimalism is a fiction.

I think life is a game, and it’s meant to be played. You can definitely avoid a lot of problems and minimize your burdens by sitting the game out. But that takes us directly to my favorite high school motivational poster:

A ship in harbor is safe, but that's not what ships are built for.