Posted on April 14, 2016 in BOTS

Apparently text-based UIs are making a come-back. Supposedly it's a backlash to complexity and the overload of “apps” – but what’s coming back isn’t terminal mode – so put away that VT-220 you’ve had squirreled away in the attic in case you’re suddenly teleported back to 1987 and forced to program in VAX Pascal again in your college terminal room.

No, the old-new hotness is text-chat with Agents, er, I mean “Bots.” Yeah… basically an updated take on the text chat-bots that we’ve all come to hate when they pop up while we’re just trying to re-up our car insurance or browse some e-commerce site.

So, what’s really going on here?

First a little of the chat-bot back-story…

Chatbots – or just ‘bots – have been around for decades. They got their start almost simultaneously in two staples of the early Internet: MUDs, MOOs, and MUSHes on one hand, and Internet Relay Chat or “IRC” networks on the other. The first is pretty well-known to online adventure and role-play fans; here ‘bots were created as guides and other non-player-type characters to interact with people in these online virtual worlds. They continued in more graphical forms in MMOGs like WoW of course, but interacting with a real chat-bot in something like LambdaMOO or [TinyMUD](https://en.wikipedia.org/wiki/TinyMUD) was much, much more fun. More clever repartee, less hack & slash.

IRC is the granddaddy of distributed Internet chat, and bots have been around there longer than almost anywhere else. IRC bots are pretty much everywhere in the programmer-oriented universe of IRC. They maintain channels, kick off bad actors, and perform basic info tasks: taking note of continuous integration (CI) activity and code-repository updates, running Google searches and weather look-ups, and more. On newer systems like Slack – which is really a souped-up IRC service with an excellently implemented UI – bots are often a way to answer in-channel questions or help ‘noobs get settled into the swing of a new corporate culture.
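The core of most of these IRC bots is a tiny dispatch loop: parse a raw PRIVMSG line from the server, match a command, send back a reply. A minimal sketch of that dispatch logic (the command names and replies here are invented for illustration; a real bot would also manage the socket connection and PING/PONG keepalives):

```python
import re

# Matches a raw IRC line of the form ":nick!user@host PRIVMSG #channel :message"
PRIVMSG = re.compile(r"^:(?P<nick>[^!]+)\S* PRIVMSG (?P<target>\S+) :(?P<text>.*)$")

def handle_line(line):
    """Parse one raw IRC line; return a reply line to send, or None to ignore."""
    m = PRIVMSG.match(line)
    if m is None:
        return None  # not a channel message (e.g., PING, JOIN) -- ignore here
    text = m.group("text")
    # Simple command dispatch -- real bots hook commands like these up to
    # CI servers, search APIs, weather services, and so on.
    if text.startswith("!echo "):
        return f"PRIVMSG {m.group('target')} :{text[len('!echo '):]}"
    if text == "!help":
        return f"PRIVMSG {m.group('target')} :commands: !echo, !help"
    return None

print(handle_line(":alice!a@host PRIVMSG #dev :!echo hello"))
```

Everything beyond this – channel moderation, kick/ban logic – is just more branches in the dispatcher, which is exactly why these bots stayed simple for so long.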

These bots have traditionally been very limited in their capabilities because writing anything other than a simple pattern-response system is really, really hard. A well-written bot is a limited form of expert system, and actually capturing knowledge in expert systems (as the CYC project showed) is really hard and ultimately incredibly brittle. There are just so many edge cases in any body of knowledge, and the bigger the rule set, the slower the system. There have been some really interesting attempts at making chatbots more user-friendly, though. One of the most common uses a system called A.L.I.C.E., a bot development environment built on AIML (the Artificial Intelligence Markup Language). It’s at the root of many of the chat-bots used in e-commerce systems today, but it turns out that attempts at having a natural-language conversation with a scripted system get tedious (for the humans) very, very quickly.
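The brittleness is easy to see in miniature. An AIML-style bot is, at heart, a table of patterns and canned responses; any phrasing the author didn’t anticipate falls through to a default. A toy sketch of the idea (the patterns and responses are invented for illustration, not taken from A.L.I.C.E.):

```python
import re

# AIML-style rule table: regex pattern -> response template, first match wins.
RULES = [
    (r"^HELLO\b.*", "Hi there! How can I help you today?"),
    (r"^MY NAME IS (.+)$", r"Nice to meet you, \1."),
    (r"^I NEED INSURANCE\b.*", "Let me connect you with a quote."),
]

def respond(utterance):
    """Return the canned response for the first matching pattern."""
    # AIML normalizes input to uppercase with punctuation stripped.
    text = utterance.upper().strip().rstrip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return m.expand(template)
    # Every unanticipated phrasing lands here -- the edge-case problem.
    return "I'm sorry, I don't understand."

print(respond("My name is Ada"))      # matches the second rule
print(respond("People call me Ada"))  # same intent, falls through to default
```

Same intent, different wording, completely different outcome – and the only fix is more rules, which makes the table bigger and the gaps harder to find.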

So, what’s the deal? Why ‘bots? Why text? Why now? In a nutshell, practically infinite storage and almost-free processing power have made deep learning cost-effective to deploy. That ability to throw almost infinite computing resources at a problem has finally allowed analysis of both background data (think: inventories, traffic data, stocks, airline flight availability, restaurant reviews, and weather, just to name a few) and user activity data (think: every single thing you do online and every non-cash transaction you ever make…) and turned it into something brand new: deep-learning ecosystems.

Once you have the ability to create effectively infinite, self-organizing pools of deeply searchable information, you no longer have the structural limitations of previous agent systems … and with the computing power available, you also have the ability to personalize all of this in real time.

Add this to impressive advances in natural language processing and we have the ability to deliver just-in-time utilities that provide information and services inside the flow of other interactions – like chat and messaging – in a way that feels a lot more like talking to another human. This is why these bots are being delivered inside the context of existing messaging systems like WhatsApp, Telegram, and Facebook Messenger: it’s a nice, constrained environment where the humans already converse with each other in short, targeted interactions.

Sounds great; so, what’s the catch?

Well, like most things on the internet – the catch is that these are products of companies whose sole interest is in making a profit. The cynical-but-true assertion that “when the product is free, the product is you” has been a staple of life on the Internet for a long time. Facebook is free because they sell ads based on your personal relationships and things you say to your friends/contacts. Google is free because they sell ads based on your search history, and even the contents of your email if you’re a Gmail user. The list is endless. As long as you understand and agree to this tradeoff – and you are cognizant of the risks to your personal transactional information – life is, if not “good,” pretty darned convenient, and we’re willing to live with the trade-off even if it’s unsettling if we think about it too hard.

However, as chatbots go from simple, service-oriented Q&A (e.g., “Siri, what’s the weather in San Francisco today?” or “Alexa, buy me some more paper towels and 4 boxes of Mallomars!”) to the deeply conversational, the dynamic gets a lot more interesting… Suddenly we’re living inside the storylines of sci-fi AI films like Her, A.I. Artificial Intelligence, or (hopefully not!) Ex Machina.

The upside is ‘whoa! …way cool tech, maaaan! We’re living in the future!’ The downside is the lack of ownership of your relationship with these things. You are (once again) being mined for your value as a consumer. In other words: Web 4.0 is “Product: You,” but with AI that can out-predict you and, in fact, out-think you. Oh, and there will be hundreds of companies buying and selling info about you and your family as you interact with these things, to reinforce more and better models to do this.

Now, of course, the ‘bots envisioned by Facebook and others are the first generation of deep-learning-powered marketing proxies – a way to have always-on order-takers and learn more and more about customer tastes and preferences (and a way to not pay actual humans, by the way… just sayin’…). There will probably be lots of missteps and annoyances as really badly written bots annoy the hell out of people for the first year or so.

But as these things get smarter, the big question no one is yet addressing is: “What happens when these agents get really personal?” What happens when people start to confide in these bots, even inadvertently? What if you tell a bot you’re really depressed? What if you tell a bot you find your co-worker attractive? What if these corporate AIs infer this (and more) from long-term analysis of your conversations and interactions? Will they sell you out to a) drug companies, b) Google/Facebook/Tinder/Match, c) the HR department, d) The Man, or e) all of the above…? Which raises the question: who is looking out for you?

Wanted: My Majordomo

Even if these first-gen bots operate flawlessly, there will still be a need for a bot (or an army of bots) owned by individuals, whose sole job is to run interference for real people – in effect, to disintermediate the marketing bot-army and act solely for, and in the interest of, individuals. These bots may also act as intelligent gatekeepers to content, and even sentinels for things like firewalls and privacy systems, keeping intruders (think: black-hat hackers, businesses, and – yes – even governments) out of your stuff.

What form will this take? I am pretty sure its first incarnation will be a stand-alone appliance (I am working on one). Its next generation may well be a bonded service that comes with service level agreements (SLAs) and has the ability to adapt as corporate bots get smarter and more crafty in their attempts to get to the users these “majordomo bots” protect.

These things will happen… the only questions are: Will you get to control the use of your online data? Or will big businesses use their clout to try to stop you from maintaining your rights as an individual and reduce you to just some kind of online marketing target…?

In retrospect, I could have titled this post “Chatbots – Threat or Menace?” I could have — but we’re not there yet. This technology is new and it’s not yet clear where it’s going. And as a developer of these technologies myself, I am not really as down on all this as this coda might seem… but given the never-ending quest for new markets and more profits, it’s a fair question and a debate that needs to happen.

We all (I do it too) get wowed very easily by shiny new technology. It’s cool stuff. Chatbots connected to these deep-learning systems represent a huge potential leap in what we can do with our technology, a lot of it really good; but with any new leap we need to be proactive about managing the downside risks… and not just for businesses, but for ourselves.