Research & Analysis of Today’s Internet
social optimizer (noun): a person who waits until the last minute to cancel social plans in order to choose the best possible plan for oneself at the expense of others.
My friend is the worst social optimizer; he canceled our dinner plan after I did all the shopping and cooking, and now I see him partying on Instagram.
I have an extra ticket for this show. Do you wanna go? I’ve been optimized by Kathryn. Her boyfriend just invited her to his place.
What?! You are not coming to my party? Are you trying to optimize me?
This problem has become prevalent because most of us have gotten used to using asynchronous modes of communication like email, texting, and Facebook. We don’t respond to requests for attention as they come, like we used to when the phone rang. We wait until we are ready to respond and when we do, we process them all at once. Asynchronous communication tools allow us to optimize our lives.
But not all of us work this way. Those who are guilty tend to have jobs that require singular focus, like writers, video editors, computer programmers, and graphic designers. Because losing focus and regaining it is very costly for these professionals, they tend to love asynchronous modes of communication. They do not want to fix anything in their schedule because they do not know when they will be done with their tasks. They move on to the next task whenever they complete the current one, not after a fixed duration. This is partly because their value lies in their products, not in their time. Spending 40 hours writing an article as opposed to 10 does not guarantee that it will be better.
But those whose job is managing people do not operate this way. Their value lies in their time. If you expect to be pulled in all directions at all times, and if you do not have a task that requires singular focus (or “flow”) until it’s done, then it’s relatively easy to organize your time on an as-requested basis. People like doctors, therapists, teachers, project managers, and CEOs are used to making appointments and ending their meetings as scheduled (not when the goal is complete).
Although both types appear to be optimizing their time, the former camp is actually optimizing their products, or results, whereas the latter camp is truly optimizing their time. So, we could call the first type “result optimizers” and the second “time optimizers.” We can then see why the first camp is more likely to produce social optimizers: they are trying to maximize the fun. The latter camp is used to not achieving the best possible result within the time allotted, so they don’t think much about the result when booking an appointment, and are more accepting of the outcome, whatever it may be.
Today, one in three marriages is initiated online, and it is inevitable that this ratio will keep growing. A few decades ago, television was the most influential medium and the most popular target of criticism for corrupting our brains. Now, social media has unseated television from the top spot. Between the two mediums, the most obvious difference is that television is a one-way mass medium whereas social media like Facebook is two-way. Another difference: television is highly centered whereas social media is highly fragmented. By “centered,” I mean a large number of people share a small amount of content. Water-cooler conversations are not as common as they used to be because we no longer watch the same show on the same evening. Social media is fragmented, or de-centered, because each person’s newsfeed is unique. The question we would all like answered is: What is the message of social media as a medium? That is, how is the medium changing us, our behavior, and our world?
The social circles we form online are different from our real-life communities in a number of ways. For our analysis here, let’s call online communities “tribes” and reserve the word “community” for real-life communities. Firstly, social media removes the physical limitations of our social life: online tribes are decoupled from geographical proximity, and our Facebook friends need not live close to us. Secondly, online tribes are centered not around physical activities but around mental activities. Thirdly, the frequency of interaction is much greater within online tribes. And, lastly, tribes communicate mostly in writing whereas communities communicate mostly in speech.
Because of their physical proximity, real-life communities bond over practical and physical needs and desires. This is particularly true if you have children, since community support is crucial in raising well-socialized children. Online tribes bond over ideological needs and desires, like political beliefs. In other words, social media is changing not only the way we form groups but also the purpose, or the content, of the group. In real-life communities, our physical needs are prioritized over our ideological needs, so we avoid talking about politics because we would not want to undermine the integrity of the community with ideological differences. Within online tribes, the priority flips: maintaining ideological integrity matters more than anything we might need or want practically or physically.
The shift from verbal to written forms of communication has also changed the nature and purpose of our socialization. When we write, we express different aspects of ourselves. Many people on Facebook appear vastly different from their real-life versions. We are often surprised by the values people express online. We might find that our dancer friend is deeply involved in 9/11 conspiracy theories, or that our accountant friend is a Buddhist, or that our children’s math teacher is gay. Some students have been rejected from colleges based on what they said on social media after they had already been accepted in person. That is to say, colleges see something more real and true in the expressions online. These surprises occur when people we meet through real-life communities are observed online. Naturally, the opposite can happen too, as when people we meet through online tribes look and sound different in real life.
So, we need to ask ourselves: Which versions of ourselves are more influential in today’s society? For online dating, the first impression is formed entirely online. We interact with our Facebook friends and LinkedIn colleagues far more frequently online than off. Even if real-life versions are still more influential, it’s only a matter of time before the online versions take over. What this also means is that real-life communities will become less important than online tribes. We are already seeing the gradual destruction of local communities.
Recently, there has been much criticism of the “echo chambers” and “bubbles” forming on social media. One might believe this is an effect of fragmentation, like the countless cable channels and websites reducing the amount of shared knowledge and experience, but fragmentation by itself does not necessarily cause ignorance. Fragmentation alone would simply lead to specialization, as with different college departments, and specialization has been happening since long before the emergence of social media. Specialists may lack knowledge in other fields but they are not necessarily ignorant. Ignorance arises from misunderstanding or unawareness of ideological differences. Echo chambers and bubbles are problematic because they promote ignorance.
A college department, for instance, is homogeneous in terms of knowledge but diverse in terms of ideologies. Real-life communities, too, are ideologically diverse because what binds them together is physical and practical needs. Online tribes naturally gravitate towards ideological homogeneity, which leads to echo chambers and groupthink. Why?
We are social creatures. We need a sense of belonging and an emotional support system. Given that our local communities are disappearing, we have come to depend on social media to replace them. Here, I’m using the term “social media” broadly. For instance, email and texting are included in the definition because we can send an email or text to a group of people to create a sense of belonging.
For most people, Facebook is an approval-seeking system. The main attraction is the “Like” feature. Many people pride themselves on how fast they can amass Likes, and they have come to rely on their own tribes to “Like” their posts regardless of whether they actually like them. It’s a team-building or social grooming ritual, very much like two people saying to each other: “You look great!” “You look great too!” The point isn’t to objectively evaluate each other’s looks; they are just scratching each other’s backs. The merit or substance of the content is secondary or irrelevant. The lack of a dislike button also encourages the use of these platforms as emotional support systems. Anyone who expresses ideological disagreement is seen as a threat to the integrity of the support system.
There are social media platforms with dislike or down-vote buttons, like YouTube and Quora. They are not conducive to building an emotional support system. The comments on these sites tend to be far more critical and blunt. Therefore, it’s not that social media inherently encourages the creation of ideological echo chambers. By designing it differently, we can encourage or discourage them. However, we cannot blame Facebook for encouraging echo chambers. Something must make up for the disappearing local communities. If not Facebook, another platform would simply take its place. This is why, I theorize, online tribes have a strong tendency to become emotional support systems.
If what unites a group of people is a certain set of ideologies, disagreements that imply differing ideologies cannot be allowed. For instance, a Republican wouldn’t be welcomed into a tribe of Democrats. Ideological alignment is frequently tested through numerous posts, likes, and comments. The closer the alignment, the stronger the sense of belonging.
To some degree, these ideologies are an emergent phenomenon. Just like a flock of birds that appears to coordinate its movement without a leader, these ideologies emerge without anyone leading the way. Sometimes a person might say something off the mark and quickly learn his mistake from the significantly lower-than-expected number of Likes. He would then correct course. This immediate feedback mechanism allows tribe members to align with each other quickly. The more accurate the alignment, the less tolerant the members become of any misalignment. Because social media is highly efficient, the accuracy of this alignment is increasing, which means the intolerance for misalignment is also increasing.
In a way, each member is acting like a politician who has no convictions of his own and says whatever his constituents want him to say. If he holds a belief that contradicts the belief of the tribe, say, he happens to love Woody Allen but his group is appalled by him, then he would keep quiet. He would not go out of his way to express his admiration for Allen, because the point of an emotional support system isn’t to debate, learn, or arrive at any truth. In some cases, he may not know what his tribe thinks of a particular issue. For instance, he might not know what the others think about the sexual harassment allegation against Al Franken. If so, he would likely remain quiet until someone else breaks the silence, or until a reputable publication like the New York Times agrees with his view.
Social media played a significant role in the last US presidential election. Real-life communities did not have such a strong impact on politics because, for the most part, people did not (and still do not) talk about politics with their neighbors. Because online tribes are ideologically formed and reinforced, their impact on politics is inevitable. Online tribes are performing two roles that have conflicting motives: social grooming (emotional support) and facilitation of socio-political discourse. There is no inherent problem with exchanging Likes for cute pictures of babies and pets, funny jokes, and happy moments on Facebook. These are part of social grooming we do in real life also. But because the central binding force of a tribe is a shared ideology, we are compelled or even forced to align our ideologies with those of our tribes in order to protect the integrity of our support systems. This has a politically coercive effect. It amplifies political correctness towards whatever each tribe considers “correct.” The threat of ostracization suppresses independent thinking. When someone posts a political article, the other members of his tribe are expected to show their agreement or approval. The correct answer/response is always implied in the post. If anyone deviates from that expectation, he is deemed clueless or attacked for noncompliance because a disagreement is an affront to their well-being. On a platform for social grooming, people will naturally expect reciprocity and compliance.
I believe the amount of dopamine we can get from each other has increased substantially because of the efficiency and immediacy of social media, which is leading to frustration and even anger if the expected amount is not delivered by our friends. Because we are not consciously differentiating the two roles social media can be used for, a disagreement is perceived as denial or withholding of dopamine, a breach of trust.
We have now come to depend on social media for our sense of belonging. Once we are dependent on something, we lose objectivity and freedom. We become intolerant of anything that threatens it. Every disagreement turns into a small existential threat. It is like being told that smoking is bad for you when you are addicted to smoking.
Today, many tech-savvy businesses use project management systems (Basecamp, Asana, Trello, Zoho), customer relationship management systems (Salesforce, SugarCRM, CiviCRM), marketing and sales management systems (HubSpot, MailChimp), and e-commerce systems (Magento, Shopify). But each of these apps lives in its own universe. Each requires its own username and password, so you are constantly logging in and out of different applications. They don’t share data with one another, so there is a lot of redundant data you have to manage and synchronize. It’s easy enough to find two that talk to each other through the APIs they provide, but as soon as you need to add a third one into the mix, it becomes a complex puzzle where you are limited to the one that can talk to both. But what if this one doesn’t have all the features you need? Many of these apps have overlapping features, and you use only a small percentage of each app, but you have to pay for all the features. Because each has so many features, your employees have to navigate through the complexity of each app. These unused features turn into liabilities, not assets. This is a familiar story for many. What is the solution?
Why are startups like Amazon (department store), Uber (car service), Airbnb (accommodation booking), and Spotify (music store) so successful? Because they developed unique systems that allowed them to do what they do better than anyone else. This is why the vast majority of startups focus on building their own systems. They incorporate their unique philosophies, perspectives, values, processes, and strategies into their systems. You do not have to be a VC-backed startup to develop your own system.
A typical business today might use Trello to keep track of the to-do lists, Salesforce to keep track of their customers, MailChimp to send out newsletters, and Magento to sell products online. If you were to imagine building your own system that can compete with these apps, you would naturally give up. It’s true; it’s not realistic to think that you can build your own Salesforce. But when you carefully evaluate which features you are actually using in each application, you would realize that there aren’t many. What you need is not your own Salesforce but your own app with a few features from each of these apps. When you focus only on the features you need, you realize that it’s perfectly realistic to build your own system.
When you have one integrated system of your own, you need only one login and password. All the different parts of this system are aware of who you are and what other data you have. Your to-do list is aware of your customers’ contact information. The contact record is aware of what marketing emails you have sent to a customer in the past. And the marketing email is aware of what she bought on your website. There is no redundant data.
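As a concrete illustration, here is a minimal Python sketch of what “one integrated system” means at the data level. All the names and fields are hypothetical; the point is only that every part of the system reads and writes the same record, so nothing is duplicated:

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    item: str
    total: float

@dataclass
class Contact:
    """One record per customer; every feature hangs off it."""
    name: str
    email: str
    todos: list = field(default_factory=list)        # the Trello-like part
    emails_sent: list = field(default_factory=list)  # the MailChimp-like part
    orders: list = field(default_factory=list)       # the Magento-like part

crm: dict = {}  # one store, one login: keyed by email address

def get_contact(email: str, name: str = "") -> Contact:
    # The to-do list, the newsletter history, and the purchase history
    # all reach the same object, so they are automatically "aware" of
    # one another and no data is synchronized by hand.
    return crm.setdefault(email, Contact(name=name, email=email))

c = get_contact("ann@example.com", "Ann")
c.todos.append("Follow up on quote")
c.emails_sent.append("May newsletter")
c.orders.append(Order("starter kit", 49.0))
```

Contrast this with three separate apps, where “Ann” would exist as three unrelated records that slowly drift out of sync.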
Having your own system isn’t just about efficiency; it’s also about differentiating yourself from your competitors. The software companies that create these off-the-shelf apps profit by building one application and selling it to as many customers as possible. These apps incorporate the “best practices” of your industry, but if everyone is following the same “best practices,” how do you expect to rise above them? Every business has something unique and special, but these systems cannot accommodate it. This means that, by using these off-the-shelf apps, you end up diminishing the aspects of your business that make you unique.
Suppose you have a marketing firm that specializes in acupuncturists. Generic marketing apps are not aware of the idiosyncrasies of the acupuncture market (like license requirements, insurance, and demographics), so by using these apps, you end up operating just like any other marketing firm. Your own system could incorporate those idiosyncrasies and allow you to operate in a way your competitors cannot.
Your business should not be a mere “user” of the system everyone else uses; you should be a “developer” of your own system. Today with all these technologies that allow us to find the best possible product at the lowest possible price in a matter of seconds, if you can’t differentiate yourself, you cannot survive.
If you have your own system, integrating third-party apps also becomes easier. Some apps and platforms are too critical to give up. You can’t, for instance, take a feature out of Facebook, because you cannot replicate its data. As mentioned above, if all of your apps are third-party, you are limited to those that can talk to each other through their APIs. You can’t demand that these software companies add the features you need. But if you have your own system, you can always make yours talk to any third-party app. Having one flexible piece of the puzzle at the center allows you to connect any other pieces you want. Even if one app goes out of business, it won’t have a cascading impact on the whole ecosystem of third-party apps. You can swap it out easily because all the apps connect only to your system.
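One way to picture this “flexible piece at the center” is the adapter pattern: your system depends only on a small interface you define, and each third-party app is wrapped to fit it. The vendor classes and methods below are hypothetical placeholders, not real APIs:

```python
class EmailService:
    """The only email interface the rest of your system ever sees."""
    def send(self, to: str, subject: str) -> str:
        raise NotImplementedError

class MailChimpAdapter(EmailService):
    # Hypothetical wrapper; a real one would call the vendor's API.
    def send(self, to: str, subject: str) -> str:
        return f"mailchimp: sent '{subject}' to {to}"

class OtherVendorAdapter(EmailService):
    # Drop-in replacement if the first vendor disappears.
    def send(self, to: str, subject: str) -> str:
        return f"othervendor: sent '{subject}' to {to}"

class MySystem:
    def __init__(self, email: EmailService):
        self.email = email  # the single coupling point

    def send_newsletter(self, to: str) -> str:
        return self.email.send(to, "Monthly news")

hub = MySystem(MailChimpAdapter())
hub.email = OtherVendorAdapter()  # swapping vendors touches one line
```

Because every app connects only to your system, replacing one vendor never cascades into the others.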
These days, we increasingly rely on these online systems to manage our business. Because the data is in the “cloud,” the work can be performed anywhere. Even if we commute to an office, most of the time, we are still working virtually anyway. These systems are gradually replacing our physical office space. Virtual office spaces have become more important than physical ones. Having a comprehensive and productive virtual office allows you to move and arrange your physical office flexibly. The same desk can be used by different employees. The employees can work from anywhere. They are not even tied to any specific computers. With this trend, you’d be better off investing more money in a virtual office space than in a physical one. Unless your business is retail, physical space will not give you any competitive advantage as more people are shifting towards working virtually.
For some nonprofits, a virtual office is becoming a necessity because many of them do not have an office. Most of their members and volunteers do not work for them full-time. The only way to coordinate their efforts is to do so virtually and asynchronously because trying to meet face-to-face would be nearly impossible given their time constraints. Investing in a system that allows their members to collaborate from anywhere anytime is a great use of their limited funds. In many cases, it eliminates the need to have a physical space.
Now even large corporations are allowing small groups of their employees to operate independently like startups, because they realize that, to survive in the digital world, they have to express their uniqueness. If you operate like everyone else by following the “best practices,” sooner or later, some startup will disrupt your business. You cannot forever rely on off-the-shelf apps; you need a system that allows you to be who you truly are.
Pokemon Go is now “the most successful mobile launch in history.” Its popularity caught most of us by surprise. A cultural phenomenon of this nature and magnitude will leave a lasting impact even if the game itself does not last. In this article, I would like to explore what this impact might be. What is the message of this medium called “augmented reality”?
When you distill what makes video games interesting, you realize there are only about a dozen different types of video games. The games within each type are just minor variations of the same game. But every now and then, a truly new game is introduced to the market, like Minecraft and Pokemon Go. Pokemon Go is worth playing if only to understand what makes “augmented reality” (AR) work and what its potential is. But before we go there, I need to define what augmented reality is.
There are many “real-time” apps. An augmented reality app is essentially a real-time-and-place app. The conventional definition of “augmented reality,” however, differs from this. When people use the term in other contexts, they are generally referring to the visual effect of superimposing a computer-generated image on top of live imagery. Pokemon Go has this effect too, but it is not what made the app popular. In fact, many players turn the effect off because the game is easier to play without it and loses nothing in enjoyment. I personally think it’s better without it. Despite the fact that this visual effect is not an essential part of the game, everyone now describes Pokemon Go as an “augmented reality” game. So, I’m going to go along with this label and use the term loosely to refer to the overall technological framework used in Pokemon Go.
Pokemon Go is not the first AR game, but it’s the first one that made the technology work. It is reminiscent of the early days of PDAs, when the majority of them flopped, including Apple’s Newton, until the PalmPilot eventually figured out the formula that worked. Certain technologies are solutions looking for problems, and we realize this only after applying them to countless problems. I was not sure whether AR would fall into the same category, but Pokemon Go has proven otherwise.
In the famous “technology adoption life cycle,” there is a big gap (the “chasm”) between the “Early Adopters” and the “Early Majority.” It takes a major catalyst to move a new technology across that chasm, because for the Early Majority, the benefit of adopting a new technology must be substantial. I would argue that augmented reality, before this summer, was still in the “Innovators” phase, but Pokemon Go came out of nowhere to push it through the Early Adopter phase; it has now crossed the chasm and reached the Early Majority. The novel idea coupled with nostalgic value made it compelling enough for the majority of people to invest their time in learning how to use AR. This should allow other businesses to leverage AR without investing their own resources to cross the chasm.
What makes Pokemon Go intriguing is the idea of tying a virtual object to a specific physical location through the use of GPS. Whether we see this object superimposed on a live camera image or not is secondary. The appearance (“spawning”) of a monster is tied to a specific latitude and longitude as well as to a specific time. In order to catch this monster, you would have to be physically there at the right time.
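A rough sketch of that mechanic, with a catch radius and coordinates that are my own assumptions rather than Niantic’s actual values:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in meters
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2 +
         math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) *
         math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def can_catch(spawn, player_lat, player_lon, now, radius_m=40.0):
    # A spawn is tied to a point on Earth AND a time window;
    # you must be at the right place at the right time.
    lat, lon, start, end = spawn
    near = haversine_m(lat, lon, player_lat, player_lon) <= radius_m
    return near and start <= now <= end

# A monster at the SE corner of Central Park, visible for 15 minutes.
spawn = (40.7644, -73.9731, 1000.0, 1000.0 + 15 * 60)
can_catch(spawn, 40.7645, -73.9730, now=1200.0)  # True: meters away, in time
can_catch(spawn, 40.8000, -73.9000, now=1200.0)  # False: kilometers away
```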
Also, Pokemon Go is not a zero-sum game: when a monster spawns at a specific location, everyone sees the same monster and can catch it. You don’t need to fight over it. The reason people tend to rush to these locations is that monsters disappear after about 15 minutes. The fact that it is not a zero-sum game makes it a social and collaborative game where players help each other out to achieve the same goal.
The most intriguing aspect of Pokemon Go is how it impacts the physical world. All innovative technologies eventually impact our physical world, as smartphones changed our lifestyle, but Pokemon Go is capable of specifying exactly which locations to impact. This is entirely new. For instance, Niantic, the creator of Pokemon Go, can decide to spawn a very rare Pokemon in a little-known town for one day and send thousands of people there, impacting the economy of the town. Although there is no data available yet, retail businesses blessed with Pokemon Gyms (where players congregate to battle their Pokemon) will likely see an increase in revenue this summer. I’m pretty sure the food cart vendors at the southeast corner of Central Park in New York City (the most popular spot for Pokemon hunting) have seen a significant boost in sales since Pokemon Go launched.
Being able to target a specific physical location opens many new possibilities. If I were to draw an analogy to a game of pool, the impact of technology on the physical world before Pokemon Go was like the opening shot, where you have no idea which balls will go into which pockets. Pokemon Go can now call every shot. But with power comes responsibility.
The issues Niantic is now dealing with are so new that they must feel like they are exploring the Wild West. The placement of rare Pokemon cannot be purely random because randomness can create public hazards. There have been complaints in the media about racism implied in how Niantic placed Pokestops, where players collect certain rewards: dangerous areas have noticeably fewer Pokestops. If I were in charge of placing them, I would probably do the same. But dangerous neighborhoods in the US tend to have a higher density of black residents, which means the density of Pokestops correlates with the density of white residents. In some sense, it is indeed racist, but what would be the correct answer? If Niantic ignored safety when choosing Pokestop locations, they could encourage people to unwittingly walk into dangerous areas. Suddenly, software engineers are dealing with geopolitical problems.
Another problem that I had never heard of before Pokemon Go is “GPS spoofing.” Some people have figured out a way to fake the GPS data on their phones in order to make the app believe they are walking around when in fact they are sitting on their couches playing the game. This became a big problem because those who are spoofing have a huge advantage over the legitimate players in the game, which in turn can ruin some aspects of the game for everyone. Niantic then had to figure out how to detect GPS-spoofing. As more AR apps become popular, we will increasingly rely on the authenticity of GPS data to interact with one another. I believe Apple and Google will be pressured to make GPS-spoofing more difficult as we begin to use GPS data as a form of authentication.
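A naive version of such a detector, my own sketch rather than Niantic’s actual method, simply checks whether consecutive GPS fixes imply an impossible speed:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2 +
         math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) *
         math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def looks_spoofed(fixes, max_speed_mps=12.0):
    """fixes: list of (timestamp_seconds, lat, lon), oldest first.

    12 m/s is an arbitrary ceiling (faster than any runner); a real
    detector would be far more sophisticated than this.
    """
    for (t1, la1, lo1), (t2, la2, lo2) in zip(fixes, fixes[1:]):
        dt = t2 - t1
        if dt <= 0:
            return True  # a clock going backwards is itself suspicious
        if haversine_m(la1, lo1, la2, lo2) / dt > max_speed_mps:
            return True  # implied speed no pedestrian could reach
    return False

# "Teleporting" from New York to Paris in one minute:
looks_spoofed([(0, 40.71, -74.00), (60, 48.85, 2.35)])  # True
```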
At a more abstract level, what is interesting about Pokemon Go is that it re-embodies our increasingly disembodied life. Most of us spend most of our waking life staring at screens, and what we see on the screen, for the most part, has nothing to do with what is going on outside of it. For my own business, I could be anywhere in the world as long as I have access to the Internet. Our experience within the screen is disembodied. Since Pokemon Go came out, many people have criticized the game as just another way for people to be glued to screens, but compared to other apps, it is an improvement. The vast majority of apps are entirely disembodied; what they do has no relation to the physical surroundings of the users. When we are texting on the street, for instance, we have no reason to take our eyes off the screen. AR reconnects us to our physical surroundings. Since I started playing Pokemon Go, I’ve traveled to many places I had never been before. Many of these places are in my own neighborhood; I just never had any reason to go there. I have discovered many interesting things I was not aware of before.
As popular as Pokemon Go is now, if the premise of the game remains as is, it may not last long. Its future depends on how it evolves. What Niantic has released so far is basically a minimum viable product, with some obvious features missing, like trading and social networking. The reason Minecraft remains so popular is that it has become a generalized platform for expressing creativity, like LEGO. For Pokemon Go to sustain its popularity, it too should become more of a platform where the players can define their own goals and purposes.
As of today, I see a major problem with the premise of Pokemon Go. The “gyms” in the game where you pit your Pokemon against others are already dominated by high-level players. At the rate everyone is leveling up, in a few months, anyone just starting to play would never be able to catch up. The battling in the gyms is not necessary to enjoy the game but it is still a major component of it. If the new players have no hope of becoming competitive at the gym, they would be discouraged from joining the game. If it cannot attract new players, the overall number of active users will decline over time.
If not Pokemon Go, some other app, I believe, will eventually dominate the AR space. There are many compelling use cases for AR, but nobody has yet successfully implemented them. For instance, let’s say you hear police sirens all around you but have no idea why. How can you find out what’s going on? Surprisingly, there is no real-time-and-place app you can use for this purpose. You could search Twitter to see if anyone is talking about it, but it would be hard to find anything because Twitter is global. (It’s real-time but not real-place.) Figuring out the right word to search for would be almost impossible in this particular example. You could post the question on Facebook, like “I hear lots of sirens. Does anyone know what’s going on?” But most of your friends do not live in your neighborhood, so only a small number of them would be able to respond. Furthermore, because of Facebook’s algorithm, unless a lot of your friends start liking your post, it wouldn’t reach most of them, and by the time they see it, it would be too late (a day or two later). The best you can do is call a friend who lives near you. The strength of Facebook is the authenticity of its users (real people), not real time or real place.
But with today’s technology, this shouldn’t be the case. If you live in a city, there are thousands of people around you. Some of them must have the answer you are looking for. The only problem is that you don’t know them, and there is no way to reach them. This is where an AR app can help. What we need is an app that allows us to form an ad hoc community based on time and location. This happens in Pokemon Go when a rare Pokemon spawns and many people start running towards it. At the spawn location, a temporary community is formed with the shared purpose of catching this rare Pokemon.
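Sketching such an app (the design is entirely hypothetical) shows how little machinery the core idea needs: bucket users by rounded coordinates and a time slot, and everyone in the same bucket shares a feed.

```python
channels: dict = {}  # (lat_cell, lon_cell, time_slot) -> list of messages

def channel_key(lat, lon, t, cell_deg=0.01, slot_s=900):
    # ~1 km cells and 15-minute slots, chosen arbitrarily; the
    # coarseness also means exact user locations are never shared.
    return (round(lat / cell_deg), round(lon / cell_deg), int(t // slot_s))

def post(lat, lon, t, message):
    channels.setdefault(channel_key(lat, lon, t), []).append(message)

def read(lat, lon, t):
    return channels.get(channel_key(lat, lon, t), [])

# Two strangers a block apart, within the same 15 minutes:
post(40.7644, -73.9731, 1000, "Why all the sirens near the park?")
post(40.7645, -73.9730, 1100, "Fire truck on 59th St, all clear.")
read(40.7646, -73.9732, 1150)  # both messages land in the same channel
```

A real app would of course need expiry, moderation, and a way past the chicken-or-egg problem of attracting users in the first place.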
Such an app would come in handy not only for communicating with others around you during emergency situations (like fire, earthquake, storm, loud noise, sirens/alarms, power outage), but also for music events, sports events, parades, and street fairs. Adding a peer communication mechanism centered on time and place can augment the reality and make it more enjoyable.
Here is another common scenario. Suppose you are about to throw away a table you no longer need. It’s a pain to dispose of it properly, and it’s also a shame to dump it if the table is still fully functional. And you know there must be many people near you who would love to have it. You could post it on Craigslist, but it could take a week to find someone to take it. Wouldn’t it be nice if you could post it on something like Pokemon Go, where your table “spawns” for others to see, and if they are interested, they could come pick it up right now?
Right now, there are only virtual monsters in Pokemon Go, but I could imagine businesses giving out sample products through it. It could also work like Priceline.com, where businesses try to sell leftover products at the last minute. Because we cannot predict where and when products would suddenly become available at deep discounts, the practice wouldn’t affect their regular prices.
These ideas are not new. There have been many attempts to make them work, but nobody has been able to package the solution in a compelling way. There are many reasons, like privacy concerns about pinpointing users’ locations and people’s resistance to communicating with strangers. But the biggest challenge is the chicken-and-egg problem of building any type of social network: people won’t join a social network if nobody is on it, but how can it grow if nobody joins?
These days, we use search engines to find anything and take that for granted, but it took many years for people to realize that they can search for, say, “pizzeria” to find the pizzeria nearest to them. People had to be trained to think of search engines that way. By the same token, for an AR app to be useful in the ways I’m describing above, enough people will have to be trained to think of using an AR app to find what they want here and now. Until then, we won’t have the critical mass of users to make it work. This summer, Pokemon Go pushed us closer to it. Now other apps can piggyback on its success.
Through corporatization, modern conveniences, the formation of nuclear families, and labor mobility, we have destroyed our local communities. Most of us hardly know our neighbors. Even though we can stay in touch with hundreds or even thousands of people online, we still feel lonely because the concept of “friend” has been disembodied. In the old days, if we heard a siren, we could stick our heads out of our windows and ask our neighbors what was going on. We no longer have this type of connection with the people around us. In playing Pokemon Go, I see a glimpse of a future where AR brings us back together, the way Jane Jacobs wanted our cities to be.
You’ve been hearing a lot of negative things about WordPress from your computer-savvy friends, but everyone around you is using it. What should you do? True, most programmers hate WordPress, especially those who are highly educated in computer science, and not without valid reasons. So, let me start by listing some of the problems associated with WordPress.
WordPress has been around for a long time (about 13 years) and has always kept its backward compatibility (or upgradability), which means it carries over many engineering techniques and concepts that are now outdated. It is also bloated with features that most people do not need, which makes the platform heavy and slow. If you wrote from scratch only the features you actually need, you would probably end up with less than 10% of the code. To make this problem worse, WordPress is popular among those who only dabble in programming (or don’t program at all), so they try to solve everything with plugins. A typical WordPress site (that I’ve personally come across) has about a dozen plugins. The menu bar of the administration site has a long list of tabs to sift through, which makes it hard to use. Managing plugin compatibility and dependencies becomes a headache too. You might not be able to upgrade your WordPress installation because some of the plugins might not be compatible with the latest version. And not being able to upgrade is a serious problem, because WordPress (like any popular application) is a magnet for hackers.
Sometimes the value of a product comes strictly from its popularity (like Windows or the English language). Popularity means a lot of people already know how to use it, so you save time and money in training the users. Furthermore, when something is popular, people question it less. As a developer, even if you deliver a product simpler and easier to use than WordPress, your users will always run into some problems. When that happens, if your product is not popular, you get blamed quite unfairly (because you made it!). In contrast, with WordPress, you can just shrug it off and say, “Well, sorry, WordPress can’t do that.” Your clients can take it up with the most popular content management system on the planet. In other words, if you are going to deliver something custom when there is a popular alternative, you’d better make sure there is a damn good reason for going custom. The only exception is when your client is capable of appreciating the finer points of a content management system.
Popularity is also beneficial for clients who do not want to be locked into a particular vendor to maintain the site. Since there are so many developers who know WordPress, finding a different vendor to manage your site should be relatively easy.
WordPress is great if what you need is strictly content management, and you see no reason to expand beyond it. Computer science majors tend to love using complex solutions to solve simple problems. If what they need to build is a blog or online magazine, then they should swallow their pride and use WordPress. They should save their talent and intelligence for a more meaningful problem. Also, keep in mind that no website should be expected to last longer than five years. So, just think about what is necessary for the next five years. A beautifully and thoughtfully designed system can last ten or more years but is that what you really need for your project?
If you or your clients are happy with the way a template/theme looks, then use it as it is intended. By that, I mean, don’t try to make it do what it is not designed to do. Most WordPress themes allow you to customize colors and fonts, add or remove features, choose different layout options, and insert your own logos and artwork. If what you want can all be achieved through the interfaces provided, then using a theme is a good idea. If you want control beyond that (e.g., you want to make this column wider, or this line thicker, or change the color of just this element), then forget using a theme. It would be a frustrating and time-consuming endeavor. Because you would want to avoid modifying the code directly (for future upgradability), you will feel like you are playing one of those coin-operated crane machines for plush toys; you want to reach in and grab the toy with your hands, but you can’t. Even if you decide to modify the code, most of these themes and plugins are rather complex because they need to accommodate everyone’s needs and to interface with WordPress. Sifting through the source files and deciphering what’s going on is time-consuming. It ends up taking longer than if you wrote your own theme or plugin from scratch.
If you deeply care about design and want to control all the details, then don’t use any themes even if some of them look very similar to what you want. Sooner or later, you will get stuck, and feel frustrated that you cannot move this element a few pixels to the left.
We generally start with almost nothing in the theme directory (just index.php, functions.php, and style.css). We then add only the features the client needs. Most of the features can be added by surgically copying and pasting (transplanting) from one of the default themes of WordPress.
And we avoid using plugins unless they are absolutely necessary. We typically end up with a few in the end. (We almost always use Super Cache because, otherwise, WordPress is too slow.) By writing your own theme, you have much less need for plugins. This makes the admin site easier to use and maintain. In addition, because plugins are where a lot of security vulnerabilities are found, using fewer of them makes the site more secure.
I hope I provided some answers to your conundrum. Don’t be embarrassed about using WordPress. What matters is using the right tool for your job.
The younger generations are generally savvier with the written forms of communication like texting, email, and social media, because they grew up with them. The older generations had to learn them as their second language. I'm wondering: Are the younger generations better at avoiding miscommunication when they use these digital mediums?
I have a feeling that they are better because they start making mistakes much earlier (I see this with my 11-year-old daughter), so they should have a better sense of what to do and what not to do. Naturally, they will still make many mistakes, but I suspect they make fewer obvious mistakes than the older generations do.
I think the older generations automatically blame the medium for being prone to miscommunication—the main argument being that it lacks the visual and emotional cues—instead of blaming themselves for the lack of experience. By blaming the medium, they become even less savvy. And, the confirmation bias convinces them that face-to-face or phone is a superior medium of communication. Meanwhile, the younger generations get better and better at using the written forms of communication. So the gap widens continually.
In terms of efficiency, written forms of communication have numerous advantages over the spoken forms. If you express an idea once in writing, it can potentially be shared with millions of people. You wouldn't have to repeat yourself over and over. The same reader can re-read it to make sure he understood it correctly. What we communicated does not get lost or distorted in our memories because we have a copy. We do not need to disturb or distract anyone in the middle of doing something important because written mediums are asynchronous (the writer and the reader can communicate on their own schedule and at their own pace). And so on...
This is not to say that face-to-face communication or phone conversations should be avoided. They do have their own pros and cons. My main concern here is that the older generations are blaming the wrong thing. The written forms of communication are not necessarily more prone to misunderstanding; it's their lack of experience that leads to misunderstanding. If so, the older generations should be compensating for their lack of experience by eagerly embracing these online mediums.
If your native language is English and French is your second, you will naturally encounter more instances of miscommunication when you speak French, but it does not mean that the French language is more prone to miscommunication.
There is now a lot of chatter about “progressive web apps.” What are they? They are a way to build an app using the tools of website development. The two most popular frameworks for creating progressive web apps are Ionic and React. (Ionic is built on AngularJS, which is backed by Google. React is backed by Facebook.) The most obvious reason this approach is attractive to anyone considering creating an app is cost. These frameworks allow you to write once and deploy to both iOS and Android, which means you would not have to hire and manage two separate teams. But the more important reason, I would argue, is agility.
In the end, the user experience is always better with native apps (apps written in the native language of each platform), but this superiority shows up only in the aspects related to performance, not in the design of the user experience. Native apps run quicker and smoother overall, which certainly has an impact on the user experience, but this is the easy part of creating a better user experience. It’s like buying a more powerful computer to enhance the user experience: naturally, your experience is better with a faster computer, but you do not have to be smart, talented, or creative to buy a faster computer. It is wiser to tackle the harder and more uncertain problems first. In the early stages of app development, your focus should be on the design of the user experience. If the design is superior, you can always enhance it later by writing it natively.
So, what would allow you to design a great user experience? Agility, that is, the ability to iterate the design as many times as possible given the same amount of time and budget. For this objective, progressive web apps are far superior to native apps. A popular mantra among user experience designers is “You are not your user.” What it implies is that you should not assume you know what your users want. Your job as a user experience designer is to know how to discover what the users want. You are the expert of the process of finding the answer. It is not your job to know in advance what the answer is. The arrogance implied in the latter can mislead you.
If you are on board with this philosophy, you should be looking for ways to test your design as frequently as possible. With native apps, this is very difficult because there are many hurdles to putting the latest iteration in the users’ hands. Conventional apps must be distributed through app stores, which means they need to be reviewed and approved by a party you have no control over. Furthermore, even if they are approved, the users still have to download and install the latest updates. This whole cycle can take weeks. And, you always have to deal with users running on different versions of your app, which complicates the process of getting their feedback. You would find yourself constantly asking the users, "Have you installed the latest update?"
Progressive web apps eliminate these problems. Because a progressive app can run on a conventional browser, in the early stages, you can simply serve it as a website on your own web server. Without going through app stores or the process of installation, anyone can immediately use your app. And, when you make a change, everyone gets it immediately and simultaneously. You can make as many changes as you want within a day. If a user complains about a particular design, you could even tweak it while he is still on the phone. This is powerful in the early stages of development when your app must evolve very quickly. Once the app matures in design, is proven to be a success, and is generating income, then you can safely allocate a budget to write the native versions of the app.
The success of certain apps, like 3D video games, is heavily dependent on the performance, but these apps are relatively rare. If the progressive version of your app is not successful, it is highly unlikely that it would succeed even if you create a native version of it. So, by choosing to go native from the start, you would be severely limiting your ability to realize the full potential of your app.
One legitimate concern, however, is the use of hardware-specific or platform-specific features like push notifications, GPS, camera, and accelerometer. But many of these features are now accessible from mobile browsers, and more are being added by Apple and Google. (Google is currently ahead of Apple in this area.)
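Checking for these features from a web app is straightforward. The sketch below is a hypothetical TypeScript helper (not any framework’s real API): it inspects a navigator-like object for the standard Web API entry points so the app can degrade gracefully when a feature is missing.

```typescript
// Hypothetical helper: report which device features a navigator-like
// object exposes. The property names are the standard Web API entry
// points; the helper itself is illustrative, not a real library.
type FeatureReport = {
  geolocation: boolean; // GPS via the Geolocation API
  camera: boolean;      // camera/microphone via navigator.mediaDevices
  push: boolean;        // push notifications require a service worker
};

function detectFeatures(nav: object): FeatureReport {
  return {
    geolocation: "geolocation" in nav,
    camera: "mediaDevices" in nav,
    push: "serviceWorker" in nav,
  };
}
```

In a browser you would call `detectFeatures(navigator)` at startup and hide or replace any UI that depends on a missing capability.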
If you are porting an already successful app, it would make sense to go straight to native, but if you are developing a brand new app, the progressive approach provides the flexibility you need to arrive at the magic formula that users want. After that, you can safely go native without wasting time or money.
Jiro Dreams of Sushi is a documentary about a tiny sushi bar in Tokyo with three Michelin stars. The owner and chef, Jiro Ono, relentlessly pursues perfection in sushi at age 85. The documentary is a fascinating look at the personality and the process that go into serving the best sushi in the world. Since I watched the film, I have frequently thought about his model for achieving perfection and have come to realize that it is quite relevant to the agency business, particularly for design agencies.
Jiro’s restaurant has only ten seats, and Jiro himself serves all the customers. The documentary reveals how a team of people prepares all the ingredients behind the scenes. It becomes clear that the assembly that happens in front of the customers is just a final touch. (For instance, the octopus is massaged by hand for 45 minutes.) Now, why does he have to serve the customers himself? If perfection were about his products, it shouldn’t matter who serves them. In fact, he wouldn’t even need a restaurant; he could just perfect his sushi in his basement alone, like many artists do.
When speaking of perfection, we tend to focus on products, but I believe Jiro is reaching for perfection in something else: customer experience. In other words, he is a performance artist, and his restaurant is a theatre. His products are just part of his theatrical experience.
This is why I feel his model is relevant to agency business. Agencies too are usually evaluated by the products we deliver. For instance, no design awards (that I know of) evaluate client experience; they only look at the final products. But I personally believe that when we focus on products, we are missing the point of what agencies are about. We must be careful of what we measure because it influences people's behavior. When we misunderstand the true nature of our business, we tend to invite unnecessary conflicts. Even for our own sense of contentment, it is important to understand it properly. This understanding came late in my career, and I wish I had realized it earlier.
Products are indeed important, but agency business is ultimately about customer (or client) experience. The value we provide is in knowing how to manage the process to ensure a positive experience in achieving the goal. Clients hire us because trying to execute their projects themselves would not be a positive experience. Agency business is for those who are passionate about customer experience first and foremost. Thinking of it as performance art, not as object art, would yield a better experience for both parties.
This is why, I believe, after many decades of running a sushi bar, Jiro still handles customer interaction himself; everything else has been delegated to others. Similarly, in the agency business, owners and partners should have intimate knowledge of the products and how to create them but focus on client interaction.
This is not to say that the agency business is all about being nice to clients. Performance art is a collaboration between the artist and the audience. Art is different from entertainment in that the audience is also expected to step up to the plate. Entertainment is a passive activity where the audience is fed what they want to see or hear. Art requires active involvement from the audience; they can’t just sit back and relax. Agency work is the same way. As the famed designer Milton Glaser once said, “Extraordinary work is done for extraordinary clients.” This is because the agency business is not about the product, however misleading that may seem; it’s about the dance we perform with the clients.
Choosing the right framework for building an application is one of the most challenging tasks for a CTO. It requires a wide range of experience in managing application development; many years of experience pay off.
One of the most important questions you have to ask is how conventional your application is going to be, because one of the key concepts that differentiates these frameworks is “convention over configuration” (CoC). For instance, if you are building a blog, your project is obviously very conventional. If so, you would definitely want to choose a CoC framework like Ruby on Rails and CakePHP. A CoC framework will take care of the majority of the work for you; you would only have to manually code the unconventional parts.
But if your app is unconventional (imagine building a system for Uber), you would want to consider other frameworks like Symfony. (Apparently, Symfony was originally designed as a CoC framework but, from 2.0, moved in the opposite direction.) It is a highly flexible framework that does not make any assumptions about what you want to do or how you want to do it. It basically has no opinions about how you should solve your problems.
This is good and bad. If you are an inexperienced developer who is not aware of the best practices, using such an open-ended framework could spell disaster because you would likely choose suboptimal or even wrong ways of implementing common features (not knowing that there are better ways of doing the same things). But if you are an experienced developer who disagrees with the best practices, CoC frameworks would be annoying because they impose certain ways of doing things that you don’t like. For this reason, CoC frameworks are good for beginners, who can learn how to do things correctly by using them.
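To make the distinction concrete, here is a toy sketch of what “convention” buys you. It is purely illustrative TypeScript, not the API of Rails, CakePHP, or any real framework: the router derives the handler name from the URL shape instead of requiring an explicit route table, which is exactly the kind of repetitive configuration a CoC framework removes.

```typescript
// Toy "convention over configuration" router: instead of configuring a
// route table, we derive the handler name from the URL by convention:
//   GET /posts     -> listPosts
//   GET /posts/42  -> showPost
type Handler = () => string;

const handlers: Record<string, Handler> = {
  listPosts: () => "all posts",
  showPost: () => "one post",
};

function capitalize(s: string): string {
  return s.charAt(0).toUpperCase() + s.slice(1);
}

function route(path: string): string {
  const parts = path.split("/").filter(Boolean);
  const resource = parts[0];                   // e.g. "posts"
  const singular = resource.replace(/s$/, ""); // naive singularization
  const name =
    parts.length === 1
      ? "list" + capitalize(resource)
      : "show" + capitalize(singular);
  const handler = handlers[name];
  if (!handler) throw new Error("no handler for " + path);
  return handler();
}
```

The convention saves you from writing any routing configuration, but it also shows the flip side: an unconventional URL scheme simply doesn’t fit, and you would be fighting the framework.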
CoC frameworks are also advantageous with respect to the cost of hiring developers. Because CoC enforces certain ways of doing things, you do not need to hire expensive senior developers. You can hire junior developers and you wouldn't have to worry about them messing up your code or solving simple problems in convoluted ways.
The type of business you have also influences the choice. If you are an interactive agency building many applications for many different clients, you would be better off using a CoC framework because the majority of them would be conventional and have a lot in common. You would want the CoC framework to take care of the repetitive aspects for you automatically.
But if you are a startup who is going to be working on the same app for years, you would be better off investing more time up front on “configuration” so that you would have more flexibility later (so that you wouldn't be limited by the conventions that it imposes). However, if you are creating a prototype or MVP to be discarded shortly, you would be better off using a CoC framework since the up-front investment to gain flexibility later would be a waste in that case.
If you want to develop an audience who consumes your content on a regular basis, your website, YouTube channel, or Facebook Page can’t just be about providing pieces of knowledge. This becomes clear when you study Bob Ross’ painting videos. The vast majority of his audience never painted; they just liked watching him paint. Likewise, a cooking show too has to have value by itself even if the audience never cooks. Regardless of the type of content, there has to be some redeeming quality to the act of reading or watching it.
“Cooking with Dog” has over a million subscribers. I’m pretty sure the vast majority of those subscribers have never cooked anything they saw in the videos. Those videos are just funny to watch. I think this is also true with many popular podcast producers and radio show hosts; the quality of their voice might be a more significant factor in their popularity than what they are saying.
If people want to know how to cook gnocchi, for instance, they would search “gnocchi” on YouTube. Once they learn what they need to know, they would discard it. They wouldn’t care about the other videos you might have in your channel. When people search anything on the Web, it’s about the knowledge they seek. They are not looking to be a regular audience unless the content has something more than just the piece of knowledge they were looking for.
It’s actually worse than that. Searchers have specific goals they want to achieve. Searching is just a step in that process, so even if your website has more than just the answers they seek, they are not in a mental state to check out anything else you have; they need to go back to completing their goals. You are catching them at the wrong time.
If you want to make money from advertising, you would need to build an audience. You cannot just be a destination for search results. Searchers are not loyal. They just get what they want and leave. I think this is why social networking sites are better platforms to promote content.
This post about “The Sad State of Web Development” is pretty funny. I agree. I think the ultimate dream of all computer science majors is to create their own frameworks because it would earn the highest respect among their peers. Why? Because the problems they were taught to solve in school are the problems of computer science. They didn’t study the problems in, say, medicine, economics, or education. By solving the problems in education, for instance, they wouldn’t get peer respect because their peers are not educators. Everyone wants to be respected by their own peers. This type of specialization creates a conflict of interest. The interests of computer engineers often do not align with the interests of the business because they live in their own world with different values.
It’s about time the business world realized that these latest and greatest technologies don’t provide the ROI it is looking for. It’s not worth investing in trendy technologies for the bragging rights. At this point, the frameworks and languages we already have are good enough for 99% of the problems we need to solve. The new ones are solutions looking for problems. Since those problems don’t actually exist, engineers use the new tools to build rather mundane things like blogs, todo lists, and photo galleries. These are indeed “over-engineered.” It’s like using an iPhone as a hammer; as you can imagine, an iPhone makes a lousy hammer.
The way people think about aesthetics in business has fundamentally and irreversibly shifted in the last decade or so. The shift started with the advent of online advertising. For the first time in history, advertisers could quantify the return on their investment and get detailed data back to examine what worked and what didn’t. It was a paradigm shift in the so-called “creative” business.
Here, I want to be extra clear about what I mean by “aesthetics,” “design,” or “creative,” as these words can mean a lot of different things. A product can have both a functional design and an aesthetic design. Ideally, the two are synthesized, but in many cases they are separate: the product doesn’t work well but still looks beautiful, like a chair that looks like a piece of sculpture but is uncomfortable to sit in. Below, I’m discussing only the aesthetic part.
I would also like to include creative ideas that are funny, smart, shocking, or sad. Super Bowl ads are good examples. They are stunningly beautiful, hilarious, tear-inducing, or shocking, but it does not necessarily mean that they can affect the bottom line for their clients. People might remember the TV commercials but forget the products or the companies that they were trying to promote. The customers might say, “It was entertaining, thank you, but I’ll buy it elsewhere.”
In the days before the Internet, it was not possible to quantify the effectiveness of any given creative direction. So, even if a campaign were a flop, we could still argue that it was a success as long as enough people said it was funny or beautiful. And, ironically, the ultimate judges of these creative campaigns were often other creatives in the same industry, not the consumers, because there were no practical ways to get data back from the latter. Even if business boomed for the client, it was hard to pinpoint what exactly contributed to the success.
Now everything can be measured with strategies like A/B testing. We “creatives” have no excuse. And, what we are learning is that high aesthetics doesn’t really pay off. This is a tough reality for us to swallow. It goes against everything we believed in. There are websites that present the results of A/B tests where you are asked to guess which one performed better. It’s surprisingly hard to guess because there seems to be no rhyme or reason. The one with a better design does not necessarily win. Sometimes the one that is indisputably ugly wins. In some ways, we could say that the myth, or the mystique, of “creatives” has been busted.
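The arithmetic behind this kind of measurement is simple. The sketch below (TypeScript, with made-up numbers, and only one of several standard approaches) computes a two-proportion z-score: how many standard errors apart the conversion rates of two variants are. Roughly speaking, an |z| above about 2 suggests the difference is unlikely to be noise.

```typescript
// Two-proportion z-score for an A/B test: how far apart are the
// conversion rates of variants A and B, measured in standard errors?
// convA/convB are conversion counts; nA/nB are visitor counts.
function zScore(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB); // pooled conversion rate
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return (pA - pB) / se;
}

// Made-up numbers: variant A converts 120 of 1000 visitors,
// variant B converts 100 of 1000.
const z = zScore(120, 1000, 100, 1000);
// z is roughly 1.4 here, so a 12% vs 10% split on a thousand visitors
// each could easily be noise — which is exactly why guessing the winner
// by eye is so unreliable.
```

Note how a difference that looks decisive (a 20% relative lift) still fails to clear the significance bar at this sample size; the data, not the designer’s eye, settles it.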
Everything in life has a point of diminishing returns. Take wine, for instance: beyond $20 a bottle, spending $20 more wouldn’t get you anything noticeably better, and if you spend more than $100 on a bottle, the differences become almost entirely subjective. From the point of view of ROI (return on investment), there is a sweet spot somewhere in between, and now, with everything becoming trackable, this sweet spot can be mathematically calculated. And, unfortunately for us creatives, it is nowhere near as high as we had assumed or would like it to be.
This busting of the myth has had a tangible impact on the creative industry. In the last ten years or so, I’ve witnessed some amazingly talented people lose their jobs or switch careers entirely. Many young people who studied graphic design in college are unable to get jobs as graphic designers. The other day, a friend of mine was lamenting that we used to be able to charge $1,000 for a logo even as a small design studio; now people expect to pay $100 or even less. Websites like 99designs.com have destroyed this market, since you can always find someone somewhere in the world who is willing to design a logo for next to nothing simply because it’s fun for him.
But ultimately it’s not the fault of these websites. Although the vast majority of people are not consciously thinking about the fundamental shift we are discussing here, they got the message instinctively. The clients who are still willing to spend a lot of money for high aesthetics are not necessarily smart either. In my observation, most of them want high aesthetics for themselves. It’s like people who love having impeccably designed offices; they know their beautiful interior design is not going to contribute to their bottom line, but they want it for themselves. In some cases, high aesthetics can scare customers away because they might assume that the price would be inflated just for having the beautiful office. Wiser clients might ask, “Why should I pay a higher price just so that these people can work in a beautiful office?”
Now we live in a world where creatives have to be accountable for what we do. We can’t keep producing hundred-dollar bottles of wine; the true need for them is very rare. The sweet spot is somewhere in the range of what we call “good enough.” If measurable metrics are what we need to go after, we creatives are better off focusing on other types of design, like usability, which can have a more material impact on the bottom line.
So, what are we to do now?
If high aesthetics is not what we should be focused on, we might be tempted to assume that a more logical, as opposed to emotional or psychological, approach is better, but this is not true. Science cannot solve these problems because we are at once the observer and the observed, and in science the separation between the two is essential. If the observer can affect the behavior of the observed, everything we go after becomes a moving target, and we can never arrive at the solution. Even if we succeed in coming up with a formula that works reliably and predictably, it wouldn’t take long before it impacts the market. Since no formula takes into account its own impact, it would break sooner or later. Finding formulas that work temporarily is the best we can do. It’s no longer about doing what we personally like and finding clients who are willing to pay for it. It’s about using our instincts to find the best “product-market fit” as quickly as possible. And finding it requires creativity in a more basic sense of the word.
I’ve always had a problem with the word “creative” as used in the creative industry, because most people in it are no more creative than those in other industries, like accounting. Creativity is the ability to make unexpected connections that add value to our society. This definition of creativity applies to any industry equally. Einstein, for instance, was exceptionally creative even though, I’m pretty sure, he couldn’t design a decent business card.
This is where we need to act like Luke Skywalker; we need to use the Force and trust our instincts to increase the odds of hitting the target. Logic is a tool that allows us to reach our goals more quickly, so that we can try more ideas. Speed becomes critical here because nobody can consistently predict which solution will work. All we can do is manage the probability statistically. The problems we need to solve will always sit in a low-data, high-uncertainty environment until we actually execute our ideas. In such an environment, our instincts are both the fastest tool and the only one we have.
This is what Marshall McLuhan meant when he said “the medium is the message.” The accurately quantifiable nature of the Internet is changing the message. It is no longer a monologue, a one-way communication from the supposed authorities of “creative” ideas to the masses. As the mass media have become two-way, the content has become much more of a dialogue.
When people talk about the “digital divide,” they are usually referring to the underprivileged’s lack of access to technology, especially the Internet. Although this problem still exists in the US, it is continually improving and has become a lesser concern. The new digital divide in the US has to do with how we use information technologies. Even between two people who have access to the same technologies, there can be a significant difference in what they are able to do with them. To bridge this gap, we first need to figure out what creates it.
The roles technologies play in our lives are rapidly expanding. Since we get paid for our productivity relative to others, failing to keep up with the socially expected level of productivity in our personal lives has significant consequences in our professional lives as well. Take, for instance, something as trivial as grocery shopping. Services like FreshDirect make the process much more efficient. Not only do you save the time of traveling to a supermarket, but the process of identifying what you need is also streamlined, since they retain records of what you frequently purchase. If you are good at using such services, grocery shopping can be completed within ten minutes total. Without technology, this whole process can easily take over an hour.
Another good example is the website that reports the real-time locations of buses in New York City. If you know how to use it, you need not waste time waiting for a bus. Otherwise, you could be standing at the bus stop for half an hour.
If you have a smart-lock on your door, you would be able to let your delivery person into your home even when you are not home, which could save you time going to a pick-up location yourself.
These small differences can easily add up to a significant difference overall. The disparity would be quite obvious if we compared an accountant who refuses to use a computer with an ordinary accountant today. The number of clients and jobs he could manage would be only a small fraction of the market average; he wouldn’t be able to make enough money to survive. Whether to use technologies is not really up to you. The digital divide is indeed a socio-economic problem.
Almost any tool of productivity we use was a piece of “technology” at some point in human history: the knife, the frying pan, the match, the pencil, the light bulb, the bicycle, the telephone, and so on. But we no longer think of them as “technology” because they have already become seamless extensions of ourselves. We use them almost unconsciously, as if they were part of our own bodies. From this perspective, the word “technology” refers to any tool of productivity we have not yet mastered. Once we master it, it ceases to be a “technology.” The word does not refer to any specific object; it refers to the friction or hurdle in mastering it.
If we agree on this definition, we can solve the problem in one of two ways: 1) teach people how to cope with the friction, or 2) remove or reduce the friction.
A typical way we teach people how to use a piece of technology is to explain how it works. Engineers love understanding how things work; this understanding of the underlying mechanism is what allows them to learn new technologies quickly. Because of this, they assume that teaching people how things work is the best way to help them master their tools. In other words, to bridge the gap, they are training people to be more like engineers. But how about the other way around? Why not train engineers to be more like non-engineers?
Take the iPhone, for instance. The iPhone was not originally designed for children, but any parent with a toddler would immediately notice that toddlers can use iPhones without any training. Functionally speaking, an iPhone is just another general-purpose computer; the only difference is the interface. Because the friction is so low, and because its user-interface design piggybacks on our innate understanding of the physical world, toddlers who don’t even know the word “technology” can use it.
In other words, we do not need to understand how things work in order to master them. After all, how many people truly understand how televisions work? If there is a way to remove the friction, education and training are not necessary.
Some people are simply not interested in how things work, only in what they can do with them. Understanding how everything works does not necessarily lead to higher productivity, as you could get caught up in the details and lose sight of the bigger picture. Interesting research by Enrico Ferro, Natalie C. Helbig, and J. Ramon Gil-Garcia shows that the level of computer literacy does not correlate with employment. In fact, from “basic users” to “advanced users,” the unemployment rate slightly increased.
This makes intuitive sense to me. I’ve met many people whose passion is to custom-build the fastest computers possible. They frequently talk about different kinds of CPUs, cooling systems, drives, RAM, and so on, and brag about how fast their machines are, but when I ask what they do with their supercomputers, a typical answer I get is “run a benchmark app” to measure how fast they are. It is no surprise to me that the unemployment rate doesn’t change for “advanced users,” despite all the talk about STEM (science, technology, engineering, and math) jobs.
The so-called “DIY,” or do-it-yourself, market is filled with people like this, who do not consider whether they are making productive use of their own time, because they love figuring out how things work for its own sake. Looking at the big picture, they are no more “intelligent” than those who don’t understand the mechanics, yet in our culture they are perceived as the smart ones. Because of this, it does not occur to non-engineers to blame the engineers for creating something that requires training. When we observe people struggling to use these technologies, we immediately assume that we need to train the users, not the engineers.
In order to bridge the digital divide, we need to change our cultural perception of “intelligence.” Because we perceive engineers to be the smart ones, we are not putting enough pressure on them to make smarter products that do not require training. Instead, almost by default, we pressure the users.
Educating and training those who lack “digital literacy” is a bit like trying to teach everyone how to use professional cameras. Photography used to be so technically complex that people had to be technically trained to become photographers. Today, technical “literacy” in photography is no longer necessary. As technologies mature, the need for education and training diminishes, or disappears entirely.
Driving a car still requires a significant amount of education and training, but it has gotten much easier with automatic transmissions and power steering. There is no point in requiring people to learn how to drive a stick shift. In the future, cars will drive themselves, and we will no longer need to learn how to drive at all.
At the rate technologies are evolving today, the only function of “digital literacy” is to tide people over until the technologies mature. The temporary nature of such “literacy” raises a question: can “digital literacy” legitimately be compared to literacy proper? The subjects we learn in elementary school, like language and math, eventually become part of our cognitive frameworks. They can be flexibly used to understand what we experience; they are tools for life. “Digital literacy,” in contrast, is a temporary coping mechanism, like learning how to drive a stick shift.
Before the graphical user interface was introduced, computers required a high level of digital literacy. Now toddlers can use them, given an interface like the iPhone’s. So the interesting question is whether it is worthwhile to teach “digital literacy” if this “literacy” so quickly becomes worthless. (Incidentally, Steve Jobs didn’t let his kids use iPads.) We don’t have an answer to this question, but what is worthwhile for sure is to remove as much friction as possible by designing better products and offering better services.
Many startups have this strategy: “Let’s get as many users as possible, and worry about monetizing later.” And some startups have the opposite problem: their monetization strategy is sound, but they cannot build enough user traction. I use the analogy of building a campfire to describe these problems. Square, for instance, was able to build user traction very quickly because their card reader was revolutionary. They struck the fire-starter rod once, and the tinder (shaved pieces of wood or pieces of paper) caught fire immediately. The fire started spreading quickly to the kindling (small pieces of wood, twigs, or branches), but they had no fuel wood (large blocks of wood). It looks like they didn’t even think about where the fuel wood would come from when they launched the company. Other financial firms, like Chase and Citibank, have since introduced their own mobile card readers; as traditional banks, they have plenty of fuel wood. Now Square’s kindling is burning out, and their flames are getting smaller. If they don’t find fuel wood soon, their campfire will go out completely.
My own site, AllLookSame.com, went viral a week after it launched in 2001 and has amassed 2 million users since. But I had no plan to monetize the site, as it was mostly a joke, and I couldn’t think of a way to monetize it after the fact. The fuel wood was entirely missing.
A more interesting case to consider is the opposite one, where a startup has a solid business model for getting the fuel wood but has no kindling. It’s easy enough for most startups to light the tinder, especially if they have investor money. A clever PR stunt, appeals to friends and family, advertising, and mass emailing can light the tinder, but if the kindling is missing, the tinder will not burn long enough to heat the large blocks of wood to a burning temperature. Some startups had a viable business model but couldn’t bridge the gap between tinder and fuel wood. They ran out of money and died.
Not having the kindling in place is a problem because the tinder burns very quickly. (Not having fuel wood is fine as long as you have a viable plan for getting it later.) If the kindling is not already in place when you light the tinder, the tinder will simply burn out. You will have wasted the tinder and the effort of lighting it, and will have to go find more tinder and try again.
Your kindling has to have some viral factor to grow beyond the people who were directly triggered by you. And, for the kindling to have a viral factor, it needs to have low friction. If it costs a lot of money to try it for the first time, it wouldn’t go viral (unless the proposition is exceptionally compelling). Most people would be willing to try something new if the price is less than $10. Even if it turns out to be bad, $10 is all they would lose. Convincing someone to try something that costs $100 or more, on the other hand, is no easy feat. Many startups lose money in their initial transactions in order to lower the price sufficiently, or to make it completely free for the first-time customers. VC-backed startups can afford to do this for a while to build traction.
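The “viral factor” mentioned here is often formalized as a viral coefficient, K: the average number of new users each existing user brings in. A minimal sketch of the arithmetic, where the seed size, K values, and cycle count are made-up numbers purely for illustration:

```python
# Toy model of the "viral factor": each cohort of new users invites,
# on average, K more users (the viral coefficient).

def total_users(seed: int, k: float, cycles: int) -> int:
    """Total users after `cycles` rounds of invitations, starting from `seed`."""
    total = float(seed)
    cohort = float(seed)
    for _ in range(cycles):
        cohort *= k        # the current cohort invites the next one
        total += cohort
    return round(total)

# Hypothetical numbers: 1,000 seed users lit directly by you.
print(total_users(1000, 1.2, 10))  # K > 1: the fire compounds
print(total_users(1000, 0.8, 10))  # K < 1: the fire dies down
```

The point of the model: with K above 1 the kindling catches and growth compounds on its own, while below 1 each cohort is smaller than the last, and the total creeps toward a ceiling as the tinder burns out.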
Dropbox’s offering of extra storage for referring friends was a kindling strategy. Without it, their tinder would probably have burnt out. And they also had a fuel wood strategy in place from the start: charging heavy users.
In my view, it’s miraculous that Airbnb succeeded. Not only is the price of trying Airbnb for the first time high, but the anxiety level is high too (the idea of sleeping over in a complete stranger’s home). I’ve heard that many well-known investors turned them down even though they had a chance to invest early on. In my analogy, Airbnb actually succeeded in going from tinder to fuel wood without a kindling strategy, but it took two years for the fuel wood to start burning. It’s as if the three co-founders kept lighting matches under the fuel wood for two years. eBay is also a two-sided marketplace like Airbnb, but there were plenty of low-priced items on eBay, which made it easy for people to try it without risking much money (or their own safety).
LinkedIn was a case where the kindling strategy appeared to be missing but in fact existed. Social networks have very little value to their users until a critical mass is reached, and a social network for business is particularly problematic because people won’t post silly, casual things there like they do on Facebook. So how did LinkedIn go from tinder to fuel wood? I think my own experience with LinkedIn is typical: when it first launched, I was curious enough to set up my profile, but after that, I didn’t touch it for about five years. If you think about it, that’s good enough. Unlike the content on Facebook, résumés don’t need to change often; even a five-year-old résumé is still useful. So all LinkedIn had to do was get everyone to set up a profile once. They didn’t need users to use the site constantly. Their kindling strategy was there from the start; it just wasn’t obvious.
In the old days, if you tried to sell baskets that you wove, the main criterion by which your potential customers made their purchasing decisions was how useful they were and how nice they looked. Today, the main criterion is how cheaply they can buy the same thing elsewhere. I would call this the “arbitrage economy,” because it’s about gaining from market discrepancies and inaccuracies (finding something cheaper in one market and selling it for more in another). Arbitrage used to be a game played only by the social elites who had access to the necessary information; now everyone plays it. This makes it nearly impossible to run a conventional business locally, as you would always be competing on price with everyone around the world.
Even the types of local businesses that are relatively immune to this effect, those that cannot be competed with globally because of physical limitations (like restaurants, firefighters, and nurses), will be affected as more people flock to those jobs.
But starting an unconventional or novel business is extremely difficult and risky, and requires specialized knowledge and access to capital. Everything is specialized these days; business is no longer something you can learn on the side. Even if you don’t have an MBA, you need to be equivalently competent in “business administration.”
Just think about it. If you studied archeology in college, by the time you graduate, your knowledge of that subject is far superior to that of the average person who did not study it. The gap between your knowledge of business and that of those who studied business in school is about the same. You simply won’t be able to compete with the business majors, because today’s economy requires specialized knowledge of business to sustain a viable business. If you lack it, you simply become an arbitrage opportunity for the business majors.
Even though tangible skills and talents are still required for any business to exist, you cannot make money without playing the game of arbitrage. Right now, business-savvy people can still find many arbitrage opportunities precisely because there are still many people around the world who do not know the accurate prices for what they are offering, and that’s partly because they’ve never studied business.
Recently there has been a lot of talk about the “skills gap.” Although there are 4 million unfilled jobs, 11 million people remain unemployed in the US. The most common solution, suggested by many including President Obama, is to attract more people to the “STEM” fields (science, technology, engineering, and math). I believe this is misguided, or at least short-sighted. By the time today’s high schoolers graduate from college, the STEM fields may already be crowded. The key to closing the skills gap isn’t choosing a field with higher demand but increasing the speed at which we learn any skill. The main problem with the skills gap is that the world is evolving much faster than we can learn; the “half-life” of any professional skill is becoming ever shorter. To thrive in today’s technological society, we have to be able to learn fast, and on our own.
To illustrate this, let’s look at the familiar bell curve of the “Innovation Adoption Lifecycle.”
The numbers at the bottom indicate what percentage of people belong to each segment. The horizontal axis also implies time: how an innovation is adopted by the entire population over time, “the diffusion process” (see Wikipedia’s article on the diffusion of innovations for more). Let’s think about how the skills necessary to use these innovations shift across the timeline.
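As an aside, the canonical segment sizes (roughly 2.5%, 13.5%, 34%, 34%, and 16%) are not arbitrary: Rogers defined the category boundaries at standard deviations from the mean adoption time on a normal curve. A quick sketch, using only the standard library, that recovers those shares:

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Rogers' segment boundaries, in standard deviations from the mean:
# Innovators beyond -2 sigma, Early Adopters -2..-1, Early Majority -1..0,
# Late Majority 0..+1, Laggards beyond +1.
boundaries = [(-math.inf, -2), (-2, -1), (-1, 0), (0, 1), (1, math.inf)]
labels = ["Innovators", "Early Adopters", "Early Majority",
          "Late Majority", "Laggards"]

for label, (lo, hi) in zip(labels, boundaries):
    share = normal_cdf(hi) - normal_cdf(lo)
    print(f"{label}: {share:.1%}")
```

(The exact normal-curve values are 2.3%, 13.6%, 34.1%, 34.1%, and 15.9%; the familiar figures are rounded.)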
Innovators create something revolutionary that changes the way we live or do business (more efficient, faster, higher quality, etc.), which creates new demand for people who have the skills to use the innovation. These skills are highly useful and valuable at first, but as time passes and more people learn them, they shift toward the right. Some years later, they reach the “Laggards.” At that point, those skills are useless, because new innovations will have replaced the now-old one. The question we should ask is how long this process takes. (What is the time it takes for any skill to lose half of its value?) Naturally, this differs from field to field; some are more time-sensitive than others. What we call the “skills gap” is essentially the difference between the time it takes a skill to lose its market value and the time it takes an average American to learn a new skill. It is not about the gap between the fields with high demand and the fields where the workers are. So, how can we increase the speed of our learning?
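The parenthetical question, the time it takes a skill to lose half of its value, invites a simple exponential-decay model. A sketch, where the half-life and learning times are invented numbers for illustration only:

```python
def skill_value(initial: float, half_life_years: float, years: float) -> float:
    """Market value of a skill after `years`, assuming exponential decay."""
    return initial * 0.5 ** (years / half_life_years)

def skills_gap(half_life_years: float, learning_years: float) -> float:
    """Fraction of a skill's market value already gone by the time
    you have finished learning it: decay time vs. learning time."""
    return 1.0 - 0.5 ** (learning_years / half_life_years)

# Hypothetical numbers: a skill with a four-year half-life, learned
# via a two-year program vs. six months of self-teaching.
print(f"{skills_gap(4.0, 2.0):.0%} of the value gone after a 2-year program")
print(f"{skills_gap(4.0, 0.5):.0%} gone after 6 months of self-teaching")
print(f"a $100,000 skill is worth ${skill_value(100_000, 4.0, 4.0):,.0f} "
      f"after one half-life")
```

Under these made-up numbers, a skill with a four-year half-life has lost about 29% of its value by the end of a two-year program, but only about 8% after six months of self-teaching, which is one way to state the essay’s point that learning speed, not field choice, is what closes the gap.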
After finishing our basic education, we go out into the real world and learn real skills. At first, we are behind the curve, and we have a lot of catching up to do. We need to move toward the left as quickly as we can. If you are fast enough, you reach a point where you’ve learned everything you can from other people. You are on your own. You have to innovate at that point.
At each segment, a different method of learning becomes necessary. Below is a list of those methods.
Innovators cannot learn from anyone, because they are trying to solve problems that nobody has ever solved. They have to pave their own way. How to learn something on your own is not something you can learn in school (that would be an oxymoron).
Early Adopters learn from Innovators, but the learning materials haven’t been formally prepared by anyone. Most innovators have no time to write tutorials or books, or to teach classes. Early Adopters have to be very good at finding and putting together pieces of the puzzle that are scattered all over the place.
Early Majority learn by reading materials informally shared by Early Adopters on websites and in online discussion forums. These media are easy and cheap to use, so Early Adopters are willing to contribute, but they won’t go as far as making the material easy for anyone to understand.
Late Majority learn from tutorials, books, and classes. It takes a significant amount of time, energy, and money to prepare learning materials that can be consumed by a wide audience, so it doesn’t make sense to create them until the dust of innovation has settled and “best practices” have been established.
Laggards are the last holdouts. They do not want to learn unless it becomes absolutely necessary. And, by the time they learn anything, that particular skill would already be obsolete.
How do we learn how to learn on our own? Actually, we all knew how as babies. As infants, we were largely on our own, since we had no language with which to transfer knowledge. We explored and figured things out by ourselves. At some point in our childhood, for most of us, the societal pressure to learn became greater than our own desire to learn. This is when we began losing our ability to learn on our own. The adults around us started pressuring us to learn so many things that we no longer had time to learn the things we felt curious or passionate about. Eventually we gave up and just let the grownups tell us what and how to learn. This is when self-discipline became more important.
But to be an Innovator, self-discipline is not enough. Self-discipline cannot supply the speed necessary to move all the way across the curve. You need to be driven by passion for that. Innovators and Early Adopters are driven by love/passion for what they do whereas Early Majority are driven by their needs and self-discipline. Although self-discipline is a good quality to have, it is still no match for the power of our passion. When you are passionate about something, you don’t need much self-discipline, because the work you do energizes you, instead of draining you. Our work becomes the source of our energy, not where we expend our energy.
When you search the Web for how to learn skills quickly, you find everyone talking about how to manage your self-discipline. By following their advice, you could reach the Early Majority, but it would be difficult to go beyond it, because the top 16% is filled with people who don’t need self-discipline; they love what they do. This difference can be seen between Japan and the US.
Japanese culture is more conformist than that of the US. Because of this, more people are concentrated at the center of the bell curve in Japan. At the right end of the curve, literacy is almost 100% in Japan, while 14% of Americans are illiterate (a figure that hasn’t changed in a decade). But at the other end, the US has more exceptionally skilled, creative, and talented people, and economically this matters a lot, because value (or productivity) across the curve increases not linearly but exponentially. Especially today, with powerful search engines, everyone can find the best and the cheapest of anything in the whole world. The best is many times more valuable than the second best. This is partly why the top “1 percent” gets everything: we all flock to them.
So, what is the solution to the “skills gap”? To increase the speed of learning. How do we do that? By finding our passion. How do we find it? If you’ve already lost yours and can’t find it, managing your self-discipline is your second-best solution (as many people advise on the Web). But for our children, I think the best solution is to keep their own passions alive, not to overwhelm (and eventually extinguish) those passions by pressuring them to perform well academically. It’s like protecting a flame from the wind when starting a campfire, until it is strong enough that it no longer needs your care or protection.
To commoditize everyone else’s job. Let me explain what I mean by that. 99designs.com is a good example. Say you need a logo designed for your business. You describe what you want, post it on 99designs.com, and a bunch of designers from around the world will design a logo and present it to you. You pay only the winner. It’s a great service. It takes advantage of the fact that somewhere in the world there are always people willing to design a logo for free because it’s fun. But if you are a professional designer, this is bad news. Your career has now been commoditized by 99designs.com. There is less demand for your services, and your fees will come down. Even if you are a high-end designer, this will still affect your business, because many other designers will try to climb up the ladder to compete and survive. It brings down the entire market, because graphic design as a whole has been commoditized by services like 99designs.com.
This has been happening in just about every market, not just graphic design. Take florists, for instance. By now, pretty much all florists have been commoditized by online services like FTD.com and 1800flowers.com. The creative aspects of being a florist have been stripped out of the business; all they do is reproduce the flower arrangements in the photos supplied by one of these websites and fulfill the orders that come in through them. They are essentially running a franchise business, like McDonald’s.
Bookstores are now beyond commoditized; most of them are out of business. You think restaurants are safe? Not really. Many of their customers now come in through OpenTable.com. Many restaurant owners hate these sites because they take a hefty cut of an already thin profit margin. For lower-end restaurants, sites like Seamless.com and Delivery.com do the same thing. Some of these restaurants are becoming order-fulfillment services for these websites. This will only get worse as more people get hooked on the convenience and efficiency of online ordering.
You think Etsy is great? Yes, if you want to buy crafts for cheap. But if you are trying to make a living doing crafts, Etsy is your enemy. Every time you come up with a great idea and post something beautiful on Etsy, through the brute force of efficiency, a whole bunch of copycats will find your hot idea and make and sell the same thing for much lower prices. This brings down the entire market for handcrafted products, because there are always people somewhere in the world who are happy to sell their products at cost, or even at a loss, because crafting is fun for them.
I could go on and on. Every market is being commoditized in the same manner. So, what do you have to do to survive in this new environment? You need to commoditize your own job before someone else does. Let’s say you are an architect. Don’t practice architecture yourself. Use your domain knowledge to commoditize the market of architects. Whatever career you have, don’t practice it; commoditize it. That is the only way to survive in your market. Otherwise, sooner or later, other people will, and you will be working for them at a much lower price. Whoever succeeds in commoditizing your market will make all the money, and you will be sharing what’s left of it with the other 99% of people in your market.
Now there is much talk about the 1% versus the 99%. Before you point your finger at the 1%, think about how you spend your own money. These days, we can find the best products at the lowest possible prices quite easily. Since everyone can do this, everyone flocks to the best product, a.k.a. “the top 1%.” And the top 1% can be anywhere in the world; we can find them and have their products shipped. So, if you are using any tools of efficiency to find the best products and the lowest prices, and to save time, you too are guilty of creating the income disparity. You might argue that the government should tax the rich and flatten the income distribution. That would not work: if your government did that, your whole country would become uncompetitive, and everyone would lose.
What is now happening globally is a bit like climbing on top of each other to escape hot lava; there is nothing to climb on except each other. The world cannot go on this way forever. The brute force of efficiency will soon hit a saturation point where there is no more inefficiency to exploit. During the golden era of arbitrage, people exploited market inaccuracies to make fortunes; now we apply computer programming to exploit every inefficiency we can find. We are playing musical chairs, and the music hasn’t stopped yet.
Just as we cannot objectively measure the greatness of love, we cannot objectively measure the effectiveness of communication. Nobody can be a great lover to everyone, and nobody can be a great communicator to everyone. On a résumé or in a help-wanted ad, it is useless to list “communication skills.” Everyone is a good communicator to the people they surround themselves with; nobody thinks they lack “communication skills.” In fact, if someone claims to have “great communication skills,” it is a pretty good sign that he has a poor understanding of what communication is. If you need someone to manage communication, look for someone who understands the fundamentally nebulous nature of communication, not someone who thinks he knows what he is doing. (This is true for user-experience designers too.)
Most of us learned to communicate intuitively, just as we learned to walk intuitively. If asked to explain how we do what we do, we have a hard time articulating it; most people have no theoretical understanding of how we communicate. It’s very much like songwriting. You don’t need to study music theory to write great songs, and some of the best songwriters never studied theory. But there are certain things in life that we cannot achieve without theory, like writing a symphony. Theories allow us to expand our possibilities beyond our own intuitions and talents.
Theory is a method of generalization. Without a theory, we can apply our knowledge or skills only to specific things. A particular skill may work very well for a particular task but not for a different one, and you wouldn’t know why it fails, because you don’t know why it worked for the original task. To be able to apply that skill to other tasks, you need to understand its general principle: why and how it works for the original task. To go beyond the specific application of your skill, it is not enough to master the skill; you need to step outside of yourself and analyze how you are doing what you are doing. Self-proclaimed “great communicators” haven’t realized the need to do this, and their mastery of communication within their own bubbles has them convinced of their own greatness.
Those who have never studied communication theoretically (i.e., who can communicate only intuitively) tend to take their own knowledge or point of view for granted, because they have no objective understanding of their own mental processes. This makes them poor instructors and teachers. When someone tries to explain to you how to get to his house, he might say, “When you get out of the subway, walk towards the church,” overlooking the fact that the church is visible only if you exit the station in a specific way. It doesn’t occur to him to tell you which way to turn after you pass the turnstile, because he himself always turns the right way without thinking about it.
If you do not regularly evaluate your own mental processes, your natural tendency is to assume that the behaviors that worked well for you in the past are universally right for everyone. Say you spoke to a girl at a bar in a certain way and got her phone number. You might then assume that you discovered the right way to talk to all girls, and even try to teach other men how to do it.
Communication takes at least two to tango, and the specific combination of players determines the effective way to communicate. It is not possible to establish a universal standard of effectiveness for communication because everything is a variable and the possible permutations are infinite. Nobody is a master of communication with everyone, although someone can be a master of communication with a specific type of person. Even if you are good at picking up fashion models at nightclubs, that does not mean you could seduce a bookworm at a library. When trying to figure out the best way to communicate, you have to first evaluate who is talking and who is listening before you begin to think about how or what to communicate.
Writing teachers often tell you to “know your audience”. This is true, but it is equally important to know yourself. There are two aspects to knowing yourself: who you think you are, and who other people think you are. They can be quite different. For instance, when you are speaking in front of an Iranian audience, whether you are another Iranian or an American will significantly influence the outcome. You may think you know yourself very well, but that wouldn’t help you in this situation: you might not consider your nationality an important aspect of yourself, but the audience might see it differently.
When I first came to the US in the 80s, the relationship between Japan and the US was contentious because of what was happening in the automotive industry at the time. My mere presence in the same room could influence the way other people talked about cars in general. This certainly complicated my communication. My association with Japan would color the way they perceived me and what I said, and it wasn’t possible to simply ignore it.
This too falls under the category of knowing your audience, but it is an aspect we easily overlook: when someone tells you to “know” something, your tendency is to observe it in a scientific manner, without taking into account how the observer influences the observed.
For your communication to be effective, you need to take into account who your audience is and who they think you are, and employ a “tone” that allows you to achieve the desired effect. By “desired effect”, I do not simply mean the smoothest way to communicate. Depending on the situation, your objective might be to annoy or anger your audience. This is an important point to keep in mind because we tend to assume that giving the audience what they want is the ultimate objective of communication, but this isn’t always true. Photographers sometimes treat their subjects rudely as soon as they arrive at the studio, in order to capture angry expressions. The effectiveness of communication is measured by how closely the outcome matches your desired effect, not by any universal standard.
To be an effective communicator, you need to behave like a chameleon. In the West, behaving like a chameleon has negative connotations, but in Japan, it is expected of everyone. If you were to observe one Japanese person in Japan throughout the day, you would notice significant shifts in the way he acts. Even the language itself changes depending on whom he is talking to. Just by reading a few lines of written dialogue, you could guess whether the speaker was talking to someone older or younger, a man or a woman, at work or at home, etc. The mutable nature of the self is so deeply assumed in Japanese culture that it is reflected in their language.
As this blog article concisely explains, “voice reflects the nature of the author, while tone reflects the nature of the intended audience.” This is an important distinction for all forms of communication, not just writing. Even in speaking, it is more effective to change your tone depending on whom you are talking to. This does not necessarily mean you are pretending to be someone you are not. What should come through, despite your shifting tones, is your “voice”, which still allows the audience to recognize you. But “voice” is not something you consciously craft. As your creative expressions mature over time, your voice will emerge naturally, almost unintentionally. In this sense, there is no need to be concerned about your own voice. It is for your audience to perceive, not for you to control or manipulate.
Most people do not need to communicate with people outside of their own worlds. If you are a scientist, you are most likely surrounded by other scientists. If you are a banker, by other bankers. A lawyer, by other lawyers. In these situations, relying on intuition is sufficient to become successful, just as great songwriters do not need to understand music theory. But to pursue communication as a profession, we cannot simply rely on our own intuitions. We must study a variety of theories: cultural anthropology, linguistics, psychology, semiotics, sociology, and so on. And, I believe, the more broadly you study, the better. After all, communication is a form of translation. It is about connecting the dots. We can let the specialists fill in each dot. Our job as professional communicators is to recognize which dots need to be connected. For that, we need to see the world holistically.
Also, we as professional communicators do not necessarily have to be good speakers, writers, or visual artists ourselves, just as great composers do not have to be good violinists, trumpeters, or percussionists. Our key competency is our theoretical understanding of communication, and our ability to generalize what we learn. But let’s not delude ourselves; communication is something we will never fully understand no matter how much we theorize. Communication is not a skill, just as love is not a skill. We can only aim to increase the probability of success through our theories; failures will always be unavoidable. It is not possible to come up with a universally right way to communicate. Every instance will have different variables and permutations of players. Every need to communicate will require a different solution. Our job is to increase the odds of success, not to pretend that we know the right answer.
In the past 10 years, digital photography has disrupted the market for photographers so much that many of them are now struggling to survive. There are many reasons, but one of the most significant is the accessibility of the medium. Digital cameras allow us to take as many photos as we want at no cost. In the days of film, the cost of film and processing was a significant barrier to entry; once photographers crossed that barrier, they were in good shape. The barrier protected them from a flood of wannabe photographers. Another barrier to entry was technical competence. Photography used to be a lot more technical, and being able to master the physics of light and to operate complex equipment protected photographers from potential competitors who couldn’t. Digital photography destroyed these barriers, and now the market is flooded with self-proclaimed “photographers”. And all of us will become increasingly better at photography, as we now carry high-quality cameras in our pockets everywhere we go and can easily share our photos online to get feedback.
What we can extrapolate from this is that it no longer makes sense to tie ourselves to any medium. Every medium has (or had) skills associated with it. These skills took a lifetime to acquire, but they can now be embedded into the tools we use. If your job title is tied to any particular medium, it’s likely that your market will be disrupted sooner or later by technological innovations. Postmodern artists have been aware of this for many decades. They simply choose the mediums that best express their ideas, and are not married to any of them. They are medium-agnostic. Andy Warhol was a good example. Everything around us now evolves so fast that we all need to be medium-agnostic, and to focus not on what skills we have (or should have) but on who we are.
What does it mean to be medium-agnostic? Photographers, for instance, need to embrace the reality that it’s not their knowledge of how to operate photographic equipment that is relevant or valuable to the market today. The glass-half-full perspective is that their creativity has now been freed from their medium. They are simply visual artists, free to explore other mediums. Some photographers enjoy making the objects that they photograph; perhaps they are actually industrial designers. Some photographers love directing people to act or look certain ways; perhaps they are really directors, and their ability to direct people can be applied to many different fields.
Today, what does it mean to be a “writer”? Writing is a medium. It tells us that you have a certain skill, but it tells us nothing about who you are, how you think, or what you value. It’s better to be a lawyer, entrepreneur, philosopher, doctor, scientist, marketer, or activist who can write and has something unique to say than to provide writing as a service for others. It’s just a matter of time before computers become smart enough to take our basic ideas and write articles for us. Some writers are great story-tellers, but their talent does not have to be tied to writing. I know a writer who applied her story-telling theories and philosophies to the field of user experience for websites, and her ideas have gained significant traction in the field.
What we call “skill” is a quality that we can measure. When it cannot be measured, we tend to use the word “talent” or “creativity”. Skills express “what” we are, and creativity expresses “who” we are. What we are can be compared and measured, but who we are cannot be. Pursuing measurable things is safer in general. The goal and the path are clear at the start, allowing us to make the choice more confidently and securely. But this also means that many other people will pursue the same roads of certainty. People naturally flock to certainty. With competition becoming increasingly global, our children will be competing with billions of people from around the world, head-to-head. Their citizenships will carry no inherent advantages or disadvantages. Their skills and knowledge will be fungible commodities, like salt, crude oil, and metals. It’s hard to survive if you are just a grain of salt. Our children should learn to feel comfortable traveling the road of uncertainty. It’s a waste of time and energy to compete with billions of others for academic excellence if academic competence will just be a fungible commodity. If they are to have a meaningful life, they will have to pursue who they are, which cannot be compared or measured, for which there is no competition, to which no medium is tied.
If you Google “Twitter is stupid”, you will find many people asking what Twitter is good for and why some people love it so much. They have tried it and found it utterly useless. So did I. Since Twitter was founded in 2006, I’ve given it at least three serious tries, dedicating a significant amount of time to learning about it and using it, and every time I failed to understand the point of it. Sure, we all have things we don’t enjoy that others passionately love. I have no interest in watching sports, but I can at least understand why many people love it. What bothers me about Twitter is that I do not understand it even theoretically. But now I think I’ve finally solved this big mystery.
First, let’s consider the arguments against Twitter. Twitter cannot function effectively as a news aggregator (like RSS readers) because so much of the content is people making casual remarks; important pieces of information get buried in the noise. Twitter cannot help us keep in touch with our friends and families because most of us do not actively use it. (Most of my own “followers” are strangers.) We cannot share meaningful ideas because the number of characters is capped at 140, just enough for the title and URL of an article you want to share. It’s not a great place to share photos either; Instagram does that much better.
So, why does anyone use Twitter? There are a few factors involved, and I found some explanations on the Web, but the key concept to understand is how our sense of relevance is distributed across time. Let’s call it “time-relevance distribution”. Compare, for instance, a Wall Street trader and a carpenter. To do their jobs well, they need information. Suppose we throw a hundred random pieces of information at each of them, and they pick the ones they find useful or interesting. We then place those pieces on a timeline according to the time associated with each. One may be about the latest iPhone that Apple announced 10 minutes ago; another may be research on the archival characteristics of different types of wood, published 10 years ago. Say the Wall Street trader found the former useful, so we place it on his timeline at 10 minutes. The carpenter found the latter relevant to his job, so we place it on his timeline at 10 years. If we repeat this for all 100 pieces of information, we would probably get two distinct curves on the timeline. See the hypothetical graph below.
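The shape of those two curves can be approximated with a simple decay model. This is a minimal sketch under invented assumptions: the `relevance_weight` function and the half-life figures are mine for illustration, not derived from any data.

```python
def relevance_weight(age_hours, half_life_hours):
    """Relevance of a piece of information, halving every
    half_life_hours as the information ages."""
    return 0.5 ** (age_hours / half_life_hours)

# Invented half-lives: the trader's interest halves every 6 hours,
# the carpenter's every 2 years (expressed in hours).
TRADER_HALF_LIFE = 6
CARPENTER_HALF_LIFE = 2 * 365 * 24

# Information ages: 10 minutes, 1 day, 1 month, 10 years (in hours)
for age in [1 / 6, 24, 24 * 30, 24 * 365 * 10]:
    print(f"age={age:>9.1f}h  "
          f"trader={relevance_weight(age, TRADER_HALF_LIFE):.4f}  "
          f"carpenter={relevance_weight(age, CARPENTER_HALF_LIFE):.4f}")
```

With these made-up numbers, the trader’s weights collapse within a day, while the carpenter’s barely change even after a decade, which is roughly the skew the two curves are meant to convey.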
As you can see, the distribution of the Wall Street trader, as represented by the blue area, is heavily skewed to the recent past. To him, if any piece of information is more than a day old, it’s not particularly relevant or useful. In comparison, the carpenter, as represented by the red area, is not particularly concerned about the timeliness of the information. In fact, he is more interested in matters pertaining to timelessness.
Journalists, fashion designers, event planners, and weather forecasters are very much like the Wall Street trader. And, English teachers, philosophers, farmers, and whiskey makers are more like the carpenter. And, I believe we all naturally gravitate towards timeliness or timelessness. This is the key difference between those who use Twitter and those who don’t.
Twitter’s real-life equivalent is CB radio, where truck drivers share what is going on now with other drivers who happen to be near one another (and who are interested in the same information). Most of them wouldn’t be interested in hearing a recording of that chatter from an hour ago.
In contrast, Facebook is equivalent to Post-it notes we leave for other people to see. We do not expect the others to be paying attention to it in real-time. (If the others are here now, we would rather tell them verbally.) On Facebook, we leave the information for others to consume within a day or two. The common mistake that people (including myself) make on Twitter is that they leave messages thinking that people would read them later (as they do on Facebook), but this is not what Twitter is really for.
Blogs extend time-relevance further. A typical blog is equivalent to a bulletin board where fliers can remain push-pinned for weeks. Some blog posts remain relevant for years, and people find and read them through search engines. Many of my old posts continue to receive as many visitors as my latest one, because readers enter my site directly at individual posts, bypassing the home page. The impact the home page has on the popularity of individual posts is becoming less significant these days because Google functions as the home page.
Time-relevance distribution of young people tends to be skewed toward the present, like that of the Wall Street trader above, because their own life experience is skewed towards the present. If you are 20 years old, things that happened in the last 20 years will naturally be more interesting to you than those that happened 50 years ago. And as we grow older, we become more interested in what remains timeless, because we have witnessed it in our own lives. (Teenagers cannot personally experience timelessness.) People who live in the city tend to be more skewed towards the present than those who live in the country, because things change much more quickly in urban areas.
Because time-relevance distribution is different for everyone, we naturally choose the mediums that fit our own. If you are primarily concerned with timeless matters, you are going to have a hard time understanding why anyone would use Twitter, because there is nothing of interest to you in the highly compressed region of the timeline for which Twitter is optimized.
Twitter in essence is a reincarnation of AOL chat rooms. In the early 90s, most people weren’t paying much attention to the Internet until AOL introduced the idea of chat rooms. The rooms were generally divided by topics of interest, and strangers who shared the same interest came in and chatted with one another, eventually forming loose communities. The idea of chat rooms gradually waned in popularity, but Twitter picked up where AOL left off. On the surface, Twitter does not look like an instant messaging program, but it does the same thing. The concept of the “room” is replaced by the “hashtag”. Hashtags form temporary rooms where those who share the same interest gather and chat. This is why the 140-character limit still prevails on Twitter: if you think of Twitter as a platform for real-time conversation (like instant messaging or texting), why would you need more than 140 characters? In a face-to-face conversation, if you kept talking at length without letting the other person speak, you would be considered rude.
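The hashtag-as-room mechanic can be sketched in a few lines of code. This is a toy illustration, not Twitter’s actual implementation; the messages and the `group_by_hashtag` helper are invented:

```python
import re
from collections import defaultdict

def group_by_hashtag(messages):
    """Group messages into ad-hoc 'rooms' keyed by hashtag,
    the way hashtags recreate AOL-style topic rooms."""
    rooms = defaultdict(list)
    for msg in messages:
        for tag in re.findall(r"#(\w+)", msg):
            rooms[tag.lower()].append(msg)
    return dict(rooms)

messages = [
    "Power is out on my street #sandy",
    "Stay safe everyone #sandy #nyc",
    "Great game tonight #knicks",
]
rooms = group_by_hashtag(messages)
print(sorted(rooms))        # ['knicks', 'nyc', 'sandy']
print(len(rooms["sandy"]))  # 2
```

Note that a message with two hashtags lands in two rooms at once, which is part of what makes these rooms looser than AOL’s: you can address several overlapping audiences with a single remark.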
Most of us use instant messaging or texting even if we don’t use Twitter. The only significant difference is that Twitter is designed for connecting and chatting with strangers. It’s a public instant messaging platform (like AOL chat was). Many married people use instant messaging and texting with their spouses to coordinate domestic duties. In the same way, Twitter users share and coordinate information about events that are happening now, but with strangers. This is why it is highly useful in states of emergency like the Boston bombing and Hurricane Sandy. It is also useful for businesses like airlines, event venues, and Internet service providers who need to communicate with their customers in real time on a public platform (so that they do not need to repeat the same information to every customer). To find Twitter useful, you must be interested not only in timely matters, but also in connecting with strangers.
My conclusion: if you do not have any need to coordinate and share information on a real-time basis with the general public, Twitter is not for you. For my own business, I need to be informed of the latest technologies, but whether the information is an hour old or a month old makes no material difference. In other topics, I’m more interested in timeless matters. I would imagine that Twitter is useless for the majority of people. It is a tool for those whose time-relevance distribution is heavily compressed into matters of minutes and hours, not days or months. Very few people have such needs, but that does not necessarily mean they are doing something “stupid” or superficial. The common hostility towards Twitter users comes from the fear of not knowing or understanding what they do. I too had a nagging feeling that I was missing out on something important to everyone, which is what led me to investigate. Now I think I can leave them alone in peace.