tymshft

There is nothing new under the sun…turn, turn, turn

Archive for the month “May, 2013”

The term “server” will…um…stay around

In a recent post, Jesse Stay discussed two trends that are affecting people in their homes.

First, people have more “computers” in their homes. I can remember a few short years ago when our family only had a single computer. That is no longer the case in our home, and it is certainly no longer the case in the Stay home: his post mentions Xboxes, a Nest, a Fitbit, a Sonos, and “other devices.” All of these devices have computing capability.

The other trend is the increased use of the cloud for storage. While the aforementioned Xboxes get data from “a server in the closet of my office,” many of the other devices use “the cloud” for storage. What this means is that Stay does not need a server in the home for these devices – the data is stored in a server farm in North Carolina or somewhere.

Based upon these two trends, Stay opens his post with the following statement:

I’m going to go on record – the name “server” is going extinct.

Now I should emphasize that Stay is talking about home use, not enterprise use. Obviously Google and Amazon and the like are still going to need servers. But Stay is saying that for the home user, he believes that the concept of a local server is going to become obsolete – all the data will be stored somewhere else.

Well, Jesse, while you may not want to use that six-letter “s” word in your home, I’m not quite ready to banish it from my home just yet.

Why? Because of my pendulum theory.

If you haven’t encountered my pendulum theory, I first proposed it back in 2009 (although I didn’t use the word “pendulum” in this initial statement of the theory). I used it in reference to Amazon’s mimicking of something that CompuServe did long ago.

Basically, ever since computers were invented in the 1940s or the 19th century or whenever, the computing industry has oscillated between two different models of computing:

* The Benevolent Model, in which a central service provides everything that the users need, including programs and processing power. All the user needs is a dumb terminal, something that acts as a dumb terminal, or something even dumber like a punch card reader. The central service takes care of everything for you. There is nothing to worry about. Dave?

* The Rugged Individualist Model, in which a computer user doesn’t need anybody else to do anything. A single computer, in the possession of the computer user him/herself, includes all of the power that the user needs. We don’t need no central service; we don’t need no thought control.

Now obviously these are the extremes, and there have been some computer trends (like client/server) that somehow combine the two. But it still seems like we alternate between the two models, and now the cloud computing model has us all leaning a little more toward the centralized model.

In the Stay household (ignoring the Xbox for the moment), data is stored by a central service – actually multiple central services. The important thing is that it’s not stored in the home itself.

But I don’t think that it will always remain that way.

A few years from now, the Stays may run into an instance in which the cloud rains on them. Perhaps some cloud provider will have a major security breach. Perhaps a cloud provider will jack up its storage rates. And the Stays, being of a technical bent, may end up saying to themselves, “It would be cheaper and more secure to store this stuff here in the house. I think there’s some room in the closet of Dad’s office where we can stick a server.”

And the six-letter “s” word may be spoken in the Stay household yet again.

P.S. I do want to correct one misstatement that I’ve made in my previous posts on this topic – specifically, in my November 5, 2011 post in which I said:

Now I’m sure that Google and other services put enough redundancy into their systems to minimize the occurrence of outages. But there is no “cloud” company – none – that can guarantee 100% availability to its users. It is literally impossible to do so.

Now the statement itself is accurate – but its implication is wrong. For while it is true that no cloud service can provide 100% availability, it is also true that no self-storage solution can provide 100% availability. As someone who has gone through a hard drive failure, I should have known this better than anyone. (Incidentally, that hard drive failure occurred at the same time that Facebook acquired FriendFeed. Go figure.) So the question remains – can cloud services provide higher availability (and accessibility) than you can if you store things yourself?
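To put rough numbers behind that question, here is a small sketch (illustrative only, and assuming failures are independent, which real-world outages often are not) of why replicated cloud storage tends to beat a single home drive on raw availability:

```python
# Illustrative sketch: combined availability of independent replicas,
# assuming failures are independent and any one surviving copy suffices.

def combined_availability(per_copy: float, copies: int) -> float:
    """Probability that at least one of `copies` independent stores is up."""
    return 1 - (1 - per_copy) ** copies

# A single home drive that is up 99% of the time:
print(f"{combined_availability(0.99, 1):.4%}")   # 99.0000%

# The same data mirrored across 3 independent stores:
print(f"{combined_availability(0.99, 3):.4%}")   # 99.9999%
```

Neither number is 100%, which was the original point, but the gap between one copy in a closet and several copies in a server farm is large.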

Chris Hoeller isn’t doing anything. Welcome to the future.

I just read an interesting Google+ post from Chris Hoeller entitled “Automatic Technology.” Here’s an excerpt:

The next benchmark innovation is “automatic technology,” a coin I’ve phrased that encompasses wearable tech, embedded systems, and self-driving cars.

They all go hand in hand to create a seamless experience for the user. Imagine that your entire house, your wearable device, and your car are all apart of the same system. The technology knows where you are and what [you’re] doing.

And, more importantly, you don’t have to DO anything. Hoeller goes through a scenario that includes the following:

You are about to leave your house and you pop on your Google glasses or watch and they automatically power on. The system shifts to this mobile mode without you having to do anything….

You get in your self-driving car and the system automatically knows to switch to that. You input where you want to go, and it does the grunt work for you….

You can watch a movie, make a phone call, or surf the web without thinking about it….

Now there have been labor-saving devices for millennia. The calculator allows you to perform math with minimal thought. The washing machine lets you throw clothes and soap into a tub, and the clothes just wash. The wheel lets you move stuff around without breaking your back.

But notice Hoeller’s use of the word “automatic.” With calculators, washing machines, and wheels, you still have to do SOMETHING. We’re now moving toward a time when things just happen. You grab your wearable device, and it automatically powers on and activates. You say “I want to eat with psychos!” and the car drives to Amy’s Baking Company.

There’s still a little bit of interaction, since you still have to put on the wearable device and you still have to speak your destination. But it is becoming more and more automatic.

But what happens when the automatic technology becomes PREDICTIVE technology?

Freedom vs. privacy – the Federal Trade Commission’s view

In my Empoprise-BI business blog, I recently introduced one possible solution to the tension between freedom and privacy.

So let me present my Empoprises Rule Regarding Recording Freedom and Privacy:

I am allowed to record anything that I want.

No one, however, is allowed to record me unless I say that it’s OK.

For some reason, some of you may think that this is not a good rule to apply to society. However, I don’t see any problem with it myself. 🙂

James Ulvog doubts that my proposal would work if it were universally adopted.

I think there may be a few little implementation issues if he ever is around another person who has also adopted his rule.

So I’ve continued to search for a better solution. My search is not only motivated by the recent discussion of Google Glass, but also by the fact that this conversation impacts my day job. (Needless to say, the opinions expressed in this post do not necessarily reflect the views of my employer MorphoTrak, which offers facial recognition products.)

And facial recognition, one of the technologies that happens to be offered by my employer, has popped up in a couple of instances over the last few days.

If you follow Jesse Stay on Google+, you may have noticed that he asked the following question a few days ago:

…any devs with strong facial recognition and object scanning tech experience interested in partnering on building something with my Google Glass?

In a post published today, Stay shared a possible solution:

In your Google+ account settings there’s an option to notify you if someone “Shares a photo or video with me that I might be in.” Enable that and even set it to send you an SMS when it happens. When someone takes a picture of you via Google Glass and shares it to Google+, it should notify you. Approve that, and now they know who you are.

Of course, it’s a bit of a hack, and the person you’re taking a picture of must be using Google+ and have this enabled to work, but it is a way to know who you are taking pictures of.

Basically Stay has taken two separate technologies and hacked them together to come up with a solution. The fact that both technologies are Google technologies is a happy accident; it could just as easily have been technologies from different companies.

Of course, Stay’s solution only works if both people have opted in. But you may not necessarily have to opt in yourself for your data to be available to facial recognition software. This was reinforced in a recent 60 Minutes report that described an experiment by Carnegie Mellon’s Professor Alessandro Acquisti:

He photographed random students on the campus and in short order, not only identified several of them, but in a number of cases found their personal information, including social security numbers, just using a facial recognition program he downloaded for free.

And all of the protections that you personally implement regarding your data may be for naught. One example:

“One of the participants, before doing the experiment, told us, ‘You’re not going to find me because I’m very careful about my photos online.’ And we found him,” says Acquisti, “Because someone else had uploaded a photo of him.”

And that applies to other information about you, some of which is either public by design (home sales information) or public by accident (as when a U.S. company inadvertently leaks customer ID numbers that happen to be in the form nnn-nn-nnnn).
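As an aside, the nnn-nn-nnnn layout mentioned above is trivially easy to scan for. Here is a hypothetical sketch (the function and variable names are my own inventions, not from any real product) of flagging such IDs before a document is published:

```python
import re

# Hypothetical sketch: flag customer IDs that follow the U.S. Social
# Security number layout (nnn-nn-nnnn) before publishing a document.
SSN_LIKE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def flag_ssn_like(text: str) -> list[str]:
    """Return all substrings matching the nnn-nn-nnnn pattern."""
    return SSN_LIKE.findall(text)

print(flag_ssn_like("Customer 123-45-6789 renewed; order #12-345."))
# ['123-45-6789']
```

Of course, if a company can write ten lines to find these numbers, so can anyone who downloads the leaked file.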

Which returns us to our initial question – what is a workable way to strike a balance between freedom and privacy?

Last October Seth Colaner noted that the U.S. Federal Trade Commission (FTC) was working on the problem, and had issued a report entitled Facing Facts: Best Practices for Common Uses of Facial Recognition Technologies. According to Colaner, the report presents an issue that many of you already know – because of the combination of technology and data, it is possible to identify people who were previously anonymous.

Now the FTC does not have the power to legislate – only Congress can do that. (And, of course, it goes without saying that neither the FTC nor Congress has any legal authority outside of the United States.) But the FTC can certainly recommend, as Colaner notes.

The FTC report boils the above down into three short and sweet principles:

1. Privacy by Design: Companies should build in privacy at every stage of product development.
2. Simplified Consumer Choice: For practices that are not consistent with the context of a transaction or a consumer’s relationship with a business, companies should provide consumers with choices at a relevant time and context.
3. Transparency: Companies should make information collection and use practices transparent.

While the FTC’s recommendations are laudable, there’s another tension that prohibits wide adoption of them. And that’s not the need to strike a balance between freedom and privacy. It’s the need to strike a balance between profit and transparency.

To be continued…

How demographics changed daytime television after 1980

Before I get into this tymshft post, I wanted to briefly go off-topic and mention a wonderful Google+ community called Alternate History. Perhaps someone there will write a “what-if” scenario entitled “What if U.S. morning television had remained the same, despite demographic change?”

Of course, the person who wrote such an alternate history would have a tough task, since many people today would not recognize daytime television from the 1970s.

Young people today may not believe it, but in the 1970s daytime television was entirely occupied by game shows, soap operas, and inconsequential talk shows. The reason for this was an admittedly sexist assumption – since the men of the house were working outside of the home, and since the women of the house were housewives, some light entertainment was needed to occupy the women when they weren’t cleaning and baking.

While one can truly question whether this was ever true, the three (at the time) commercial television networks certainly thought it was true, and structured their programming accordingly. Unbeknownst to the network television executives, however, the demographic landscape had changed. There were fewer and fewer housewives as more women worked outside the home. More and more of the daytime television audience consisted of college students. And at the same time, technologies were emerging which allowed someone to videotape a show during the day, and watch it in the evening. Because of these factors, daytime television presented an opportunity for the networks to present edgy programming – if only they realized it.

The first network to engage in such programming was NBC. While some would later say that Fred Silverman’s genius was responsible for the change, even Silverman himself subsequently admitted that it was all an accident. NBC was being trounced by CBS and ABC, and Silverman was desperate to try anything to escape the cellar. While much of his enormous energy was concentrated on NBC’s prime time schedule, he also paid attention to other parts of the schedule. NBC had a young comedian named David Letterman under contract, and Silverman decided to put him in the morning slot, cancelling several game shows to make way for The David Letterman Show, which premiered on June 23, 1980.

It soon became apparent that this was not your typical show for housewives. One of the earliest indications of this was the appearance by Andy Kaufman as a guest.

By the end of the summer of 1980, David Letterman was the most talked-about personality on television. People began to call in sick or claim to have car problems so that they could stay home and watch the show – in fact, “Stupid Car Problems” became a recurring theme on the show.

Networks, as they always do, attempted to counter-program Letterman’s success. While ABC’s morning show with Andy Kaufman himself was not successful, CBS found its own comedian, Jay Leno, and built a show that eventually surpassed Letterman’s. After the Kaufman failure, ABC tried a different tack, luring newsman Edwin Newman away from NBC to launch a hard news show called “Dayline.”

By 1985, the transformation of morning television was complete. The soap operas and game shows were moved from the morning schedule to the evening schedule, although many commented that “Wheel of Fortune” and “Dallas,” while successful in so-called “prime time,” could never make it in the mornings. (Even today, the block of programming between 8:00 pm and 11:00 pm Eastern time is still called “prime time,” despite the fact that this is an almost forgotten part of the schedule.) Daytime, however, was a hotbed of activity, as hard news shows hosted by Newman, Pat Buchanan, and Al Franken battled against edgy talk shows from Leno, Letterman, and Phil Donahue. ABC tried to buck the trend by heavily promoting a lighter talk show from Oprah Winfrey; the show, however, was a complete failure, and was quickly replaced by a hard news show with Geraldo Rivera.

While many of the daytime personalities have changed – Al Franken, Pat Buchanan, and Phil Donahue left television for the U.S. Senate, and Leno and Letterman have long since retired – the nature of daytime television programming remains the same, even in 2013. But today’s stars well understand their debt to the pioneers of daytime television. Recently, talk show host Michele Bachmann scored a ratings coup by having David Letterman and Jay Leno appear on her show together. The segment was moderated by Bachmann’s co-host, the resurrected 1980s failed star Oprah Winfrey.

I hope to write a future post to explain how Leno, Letterman, and the like contributed to other massive changes – the proliferation of flexible work schedules, technologies that allowed time-shifting of television shows for those who didn’t have flexible work schedules, and the massive increase in people who worked from home.

That one little catch with all of these new technologies – they’ll cost you

Hiram looked with amazement at Abner’s demonstration.

“And this,” Abner concluded, “is the power of electricity. It will truly revolutionize our lives.”

Hiram almost jumped out of his socks. “So when can all of us have this power of electricity?”

“Trust me,” Abner replied, “Mr. Edison and the other businessmen would love for you to have this power of electricity as soon as possible.”

“This will be amazing,” said Hiram. “It would be handy to have the power of electricity around harvesting time. Perhaps there may be other times of the year in which I could use it also. Although frankly it would be ridiculous to use electricity all of the time.”

Hiram noticed the frown on Abner’s face. “Am I proposing too great of a use of electricity?”

“No,” Abner replied. “Quite the contrary. The businessmen desire that you use electricity all the time. Every month. Every day. And every night.”

Abner continued, despite the puzzled look on Hiram’s face. “You see, Hiram,” he explained, “once you begin to use electricity, you will have to use it forever. Today, it seems to you like a luxury. Later, you will want to use it. But after a while, you will need to use it. You literally will not be able to survive without it.”

“And you will need to pay for the electricity,” Abner continued. “At first you’ll only have to pay a little bit, but a hundred years from now you will have to pay a lot. Your home will have a number of devices that require more and more electricity, including automatic fans that cool your house using electricity, fancy types of phonographs that play records and other things, and new devices that we cannot even imagine today. And every month, you will receive an electric bill. You will have to pay that electric bill every month. And that monthly electric bill may be in the hundreds of dollars.”

That shook Hiram up. “Hundreds of dollars a month? How can anyone pay hundreds of dollars a month for electricity?”

“Actually, it’s not that bad,” replied Abner, “since people will make tens of thousands of dollars a year by the time hundred dollar electric bills become commonplace. But the important thing is that you will have to pay this every month, whether you like it or not.”

“But can’t I just not pay one month and then get electric service the next month?” asked Hiram.

“No,” replied Abner. “Can you just not eat for a month and then resume eating the next month? It will be the same with electricity.”

“I’m not sure that I like that,” said Hiram. But then he took another look at Abner’s illuminating light and his phonograph. “But then again, perhaps it may be a small price to pay after all.”

As Hiram thought about this, he noticed that Abner was fiddling with a most unusual set of spectacles. They looked like normal spectacles, but there was a strange box on the corner of the spectacles.

“Abner,” asked Hiram, “what are those spectacles? Is this something else that will use electricity?”

Abner placed the spectacles above his nose. “After a fashion,” he replied. “There will be a day when everyone will have to wear these special spectacles. Otherwise, they will fail to detect the driverless car without headlights – oh, never mind. I was just musing on some things.”

(Those of you who are equipped with an electrically-powered computing device may wish to read this thread for some further background.)

Jesse Stay, futurist? Or presentist? (With a little help from the Firesign Theatre)

Perhaps it’s just me, but whenever I hear someone utter the word “paradigm,” my first inclination is to duck for cover. There have been too many instances of people that start blabbering about paradigm shifts and then end up peddling the same old snake oil that scammers have been using for decades. “The social paradigm shift means that you can make a seven-figure income selling toilet paper via Twitter!”

But then there are people whom I respect, and when they use the word “paradigm,” I know that they know what they’re talking about. One such person is Jesse Stay, who said the following while discussing his future plans:

I have secured a wonderful agent with Waterside Literary Agents to represent what I hope will be a best-selling book on the paradigm change caused by social media and the things I’ve learned leading social media for major organizations as well as understanding the software behind them. Stay tuned for that (and any interested publishers please contact me!)

This is not Stay’s first book – I reviewed one of his previous books here. But it appears that this book will allow Stay to share more of his personal experiences. As he details in his post, Stay has spent the last several years working for two large organizations – the Church of Jesus Christ of Latter-day Saints, and Deseret Digital Media. Now while I have theological differences with this church, I recognize the worldwide presence of the LDS, and the efforts of the Church and other organizations (such as Deseret Digital Media, a for-profit entity owned by the Church) to conduct outreach via social media. And as I can attest from working for some large (secular) organizations, it’s hard for an organization that has been around for decades to suddenly embrace new technologies.

Which gets us back to the p-word. In Stay’s case, he speaks of “the paradigm change caused by social media.” Now there have certainly been technological changes that have affected religious and other organizations in the past – television, radio, the telephone, the printing press – but social media introduces some new wrinkles to the equation. Unlike television and radio, it is (potentially) a bidirectional form of communication, and unlike all prior technologies, it is easy to use across long distances. This becomes key when, for example, you work for a French company that is implementing a system in India, or if you work for a church in Utah that is sending people to southeast Asia.

Now I have no idea what Stay is going to write – at this stage, even he may not know the details of what he is going to write – but I’m curious to see if Stay tries to extrapolate into the future. When some people talk about shifting some paradigms around (after taking them out of the box first), they state that the shift has already happened, and we have to deal with it now. But in some cases, the initial shift that we perceive may result in additional shifts in the future. (Radio begat television which begat cable/satellite which begat streaming.)

Well, let’s stay tuned for the book, and I guess we’ll all find out.

Oh, and Jesse, thanks to your post, I now have an earworm. No, this song is not strictly Gene Autry, but it does fit in with the theme of this blog. Just don’t go downstream from it.

Coolidge-era real estate promotion goes sour

It was a peaceful area. People grew crops by their homes here. The neighborhood received a little fame when it staged a production of William Shakespeare’s “Julius Caesar” in a local canyon. The local residents, along with students from the nearby high school, acted in the production.

A few years later, the area attracted the attention of some investors, including some real estate developers and a newspaper publisher. To promote their real estate development, they erected huge signage, with letters 50 feet high and 30 feet wide, along with 4,000 light bulbs. The signage certainly attracted attention to the real estate community.

A few years later, however, the Great Depression hit, and plans to expand the real estate development were essentially dashed, along with many other business opportunities throughout the United States (and, for that matter, the world).

Meanwhile, the old advertising – the 50 foot by 30 foot letters – remained standing. However, the letters were falling apart, and the light bulbs had long since been stolen. The sign was only supposed to last 18 months, but it took 26 years for the city to finally decide to tear the sign down.

However, the city only tore down four of the letters – the letters L, A, N, and D. The remaining letters – HOLLYWOOD – still stand today, long after the real estate origins of the sign have been forgotten.

For more information, see this tweet, which inspired me to look up this story and this story.

How will people configure their information consumption options in the future?

(Note: I actually wrote this in March, but forgot to actually finish it, much less publish it. Since Google Reader is about to go away, however, the post is even more timely than it was before. Perhaps I should have waited until June to publish it.)

I should start this by making two disclosures that will be relevant by the end of this post. The first is that Jesse Stay is old. The second is that I am even older than Jesse.

Why are these facts relevant? Because for several years, both Jesse and I have chosen to consume information via something known as an RSS reader. RSS stands for either Rich Site Summary or Really Simple Syndication, and its purpose is to extract information from a source (such as a newspaper website or a blog) and present it somewhere else. The information reaches the “reader” (either the RSS software, or the person using the RSS software) by means of “feeds” that present all or a portion of the original content to the reader. A good RSS reader allows you to organize these feeds; for example, I can take all of my RSS feeds that relate to California’s Inland Empire and place them in an Inland Empire folder. If I want to see what’s going on in the Inland Empire, I can just look at that folder.
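For readers who have never seen a feed under the hood, the folder idea above can be sketched in a few lines. This is a minimal illustration using Python's standard library; the feed content and folder name are invented for the example:

```python
import xml.etree.ElementTree as ET

# Minimal sketch: pull item titles out of an RSS 2.0 document and
# file them under a user-chosen folder, the way a reader might group
# several "Inland Empire" feeds together.
RSS = """<rss version="2.0"><channel>
  <title>Inland Empire News</title>
  <item><title>Ontario airport update</title></item>
  <item><title>New transit line proposed</title></item>
</channel></rss>"""

folders: dict[str, list[str]] = {}

def file_feed(rss_xml: str, folder: str) -> None:
    """Parse one feed and append its item titles to a folder."""
    channel = ET.fromstring(rss_xml).find("channel")
    titles = [item.findtext("title") for item in channel.iter("item")]
    folders.setdefault(folder, []).extend(titles)

file_feed(RSS, "Inland Empire")
print(folders["Inland Empire"])
# ['Ontario airport update', 'New transit line proposed']
```

A real reader adds polish (fetching over HTTP, deduplication, read/unread state), but the core loop is just this: parse the feed, file the items, show the folder.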

While there were many RSS readers a few years ago, things changed when Google entered the market and created a software application called Google Reader. As Google Reader became more popular, other readers died off or became frozen in time, not introducing any new features and not fixing any bugs.

After a series of unfortunate events (documented elsewhere), Google announced that it would discontinue Google Reader by the middle of the year. While some sought out those alternative RSS readers that were still around, others instead declared that “RSS is dead.” Those who believe the latter asked themselves, what next? How will we get information?

Jesse Stay suggested that an older technology may become the new alternative:

While RSS is great for B2B applications of sharing information and likely won’t go away, from a consumer perspective I think email has won this battle. If your site, which previously had a “subscribe via RSS” button on it doesn’t also have a “subscribe by email” button, it probably should. It is evident to me that while many are searching for a new RSS reader that the answer for many trying to guarantee delivery of content will actually be email.

But while Stay’s suggestion makes sense for him, and for me, does it make sense for everybody? It may not:

Email use dropped 59 percent among users aged 12-17, as well as 8 percent overall, according to ComScore’s 2010 Digital Year in Review. Users between 18-54 are also using email less, though among those 55 and older, email actually saw an upswing.

Young people are turning to social networks to communicate instead–the activity accounts for 14 percent of time spent online in the U.S. That growth is fueled largely by Facebook….

Now that post is over two years old, and there is at least anecdotal evidence that some teens have even rejected Facebook and the like for newer services. But let’s assume, for the moment, that teens will use Facebook-like services such as Facebook or Google+ to consume information. (I’ll confess that as my Google Reader use has decreased due to the series of unfortunate events, my use of Facebook and Google+ to consume information has increased.)

How will teens get their information then? Through Facebook’s fake email address? Maybe. Maybe not.

Twitter? Twitter has huge volume, and if you’re subscribing to a few hundred people, there’s a good chance that you won’t see every tweet from those people. You could set up lists, but Twitter doesn’t have the elegance of an advanced RSS reader that retains items until you act upon them, or of an email application that retains items until you act upon them.
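The distinction being drawn here, retain-until-acted-upon versus flow-on-by, can be sketched in a few lines (all names are illustrative, not any real service's API):

```python
from collections import deque

# Sketch of an RSS-reader-style inbox: every item is kept until you
# explicitly act on it, unlike a stream where items simply flow past.
class Inbox:
    def __init__(self) -> None:
        self._items: deque[str] = deque()

    def receive(self, item: str) -> None:
        self._items.append(item)      # retained until acted upon

    def act(self) -> str:
        return self._items.popleft()  # only acting removes an item

    def unread(self) -> int:
        return len(self._items)

box = Inbox()
for post in ["post A", "post B", "post C"]:
    box.receive(post)
box.act()               # handle one item
print(box.unread())     # 2 -- the rest still wait for you
```

A timeline, by contrast, has no `unread()` guarantee at all: if you were not looking when the item went by, it is gone.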

Perhaps I have to rid myself of the idea that everything has to be reviewed. If I see a company’s tweet, that’s great. If I don’t, too bad. (I hope this doesn’t lead companies to retweet every hour to guarantee that I’ll see their content.)

What do you think of the future of information consumption?

My response to Matt Asay’s concerns about the “always on” nature of Google Glass

Matt Asay wrote a piece entitled Google Glass: Way Too Much Google For Its Own Good. Here’s a brief excerpt:

By constantly presenting Glass wearers with information, or the opportunity to get information, Google manages to over-deliver on its mission statement at a time when we actually rely on Google to filter out noise, rather than fill our lives with more noise. As I wrote in 2007, the secret to Google’s business model is to embrace the abundance of the Internet’s information overload but then remove the detritus and give me only what I want, when I want it, and serve up context-relevant advertising.

But by sticking a computer on my face, always on and always connected, Google has ruined this model by giving me far more than I want, all the time, and diminishing my control of the flow of Google-provided information.

That’s just a brief part of Asay’s piece. I encourage you to read the whole thing.

I offered a comment on the piece, which I am reproducing here in full.

I wonder if always-on connected devices will end up changing our expectations about connectivity.
Think about it. When I was growing up, my parents had a phone in our house, connected to the wall, that wasn’t going anywhere. When we were in the car, we couldn’t be reached on the phone. When I was at school, I couldn’t be reached on the phone. When my dad was at work, you could only reach him on the phone if you knew his work number; his home phone number would only reach my mom and her dog. When I went to college, you’d have to call a pay phone in my dorm to reach me.
At that time, there was no expectation of always being able to reach someone via the telephone. However, mobile phones slowly became more and more popular, and that phone that I would only use in emergencies slowly became a necessity. States such as California had to pass laws because some of us would answer our phones while we were driving down freeways.
In the same way, it’s quite possible that a few decades from now, we’ll require the “always on” technology, perhaps even when we sleep. Perhaps the person who DOESN’T wear his connected device in the shower will be looked upon as a backward Luddite freak.
Matt, I see what you’re saying, and you do have a valid concern, but it’s quite possible that society’s expectations could significantly change our behavior.

Bank is tough (Barbie, C. Maxine Most, and a van down by the river)

This post will talk about a plastic doll, an expert in international emerging technology market development, and a memorable image from a Chris Farley Saturday Night Live sketch. These three things are very different – C. Maxine Most is certainly not a Barbie doll, and I seriously doubt that she lives in a van down by the river. But before we get to Most’s thoughts on technology and solutions, let’s talk about something that Barbie used to say.

Over twenty years ago, Mattel created a talking Barbie doll that uttered 270 different phrases. After consumer objections, that list was trimmed to 269. The phrase that was removed from Barbie’s vocabulary was “Math class is tough.” At the time (and even today, 20+ years later) there was a concern that females were not being encouraged to pursue scientific careers; if Barbie (who, admittedly, is primarily a toy for girls) was saying that math class was tough, it was feared that this message would perpetuate the small number of women in the sciences.

But that piece of plastic had a point – not just for women, but for everyone. Math class is tough. So we develop all sorts of learning methods, as well as all sorts of technologies, to make math class easier.

[Image: a pocket calculator] (picture source, license)

I was in one of the first generations of students who had calculators in math class. While some argue that calculators atrophy our ability to perform calculations ourselves, they do allow us to concentrate on learning mathematical theory. The calculator is not a way of life (as I have often said, “a tool is not a way of life”), but is just one component that solves a problem for someone.

The point about pursuing solutions, rather than tools, was emphasized in an early morning presentation by C. Maxine Most of Acuity Market Intelligence. The presentation was part of a webinar sponsored by findBIOMETRICS. Most, along with Frost & Sullivan’s Brent Iadarola, was speaking on the topic of “The Future of Mobile ID – Mobile ID Industry Update.” (DISCLOSURE: the webinar was sponsored by several biometric companies, including my own employer.)

Most made the point about solutions by talking about her first impressions of the iPhone. From Most’s point of view, smartphones were tough before the iPhone: it was hard to get applications for the smartphone, and it was hard to use them. When the iPhone arrived, however, it was designed to be used by a person, and it was easy to get apps and easy to use them. Once smartphones were no longer tough, they could be applied to solutions.

Solutions such as mobile banking. “Bank” is another thing that is tough. But if you look at what has happened in the banking industry over the last several decades, you can see that many things have happened to make banking much easier.

A few decades ago, you had to go to the bank on a weekday, and “banking hours” were a synonym for “not all that often.” Now you have ATMs, and banks in grocery stores, and the ability to scan checks with your mobile phone to deposit them.

Even when ATMs were introduced, ATMs were tough. You had to put your account number on the checks you wanted to deposit, you had to fill out the deposit envelope, you had to insert your bank card, and you had to type in your personal identification number (PIN). Today – if you don’t avoid the ATM altogether by having your checks deposited electronically – you can throw your checks into the ATM without an account number, signature, or deposit envelope. And as for the bank card and the PIN – why do you think that my employer and our competitors were sponsoring this little chat?

Now we’ve even removed the next “hard” thing – the need to go to a bank, or to an ATM, or even to your home computer, to conduct banking. I alluded to mobile deposits earlier – as long as you have a cell signal, you can be anywhere – even in a van down by the river – and you can deposit checks to your heart’s content. You can also pay bills, transfer funds, and do all sorts of stuff that would require a visit to a teller a few decades ago.

As I previously mentioned, there are a number of banks in the Fortune 100. They’ve embraced a number of technological changes over the years, all in the name of making banking easier. But if changes continue to occur at more rapid rates, it’s quite possible that the bank customer in 2023 will laugh at the days when you had to memorize PINs and carry cards to perform banking.

Before the findBIOMETRICS webinar began, the hosts conducted a quick survey to ask the 300+ participants when they thought passwords would go away. Some people, such as myself, thought they would be around for ten years or more. The majority of respondents, however, thought that passwords would disappear much sooner than that.

There is one thing that I am certain of, however – my predictions for the future, and Acuity Market Intelligence’s predictions for the future, and Frost & Sullivan’s predictions for the future will all end up being somewhat wrong. Maybe banks will be in trees. Maybe banks will be owned by oil companies. Who knows?
