Hi Hi Hi,
In this post I'm sharing an article written by one of my students about Artificial Intelligence. This post is a small token of appreciation for their thinking. The author's name is in the post title.
When I visited the Kata.ai headquarters this summer, I remember one of their co-founders saying that the intent of using artificial intelligence is to relieve humans of mundane, meaningless tasks so that they can focus on doing meaningful things. Their example of choice was, of course, the implementation of chatbots in online marketplaces. Chatbots are used for profiling, so that customer service staff don't have to repeatedly ask for each customer's personal information and purpose of contact. Interestingly, the chatbots have their own personas, and all of them are depicted as women: Telkomsel's Veronika and Unilever's Jemma, to name a few. Based on their research, female personas are preferred because of perceived female qualities: empathy, friendliness, and other traits that suggest women are more emotionally expressive than men. This leads to the goal of making artificially intelligent agents emotionally reciprocating: how do such robots understand emotions?
I once skimmed a paper by Rosalind W. Picard about affective computing, in which Picard (1995) argued that emotions are vital for us to function as rational, decision-making humans. Picard also pointed out that while no one can read your mind, the physiological responses that accompany the expression of emotion can be measured and help others guess your emotional state. For example, when we see someone crying, it seems rational to hug them, bring them tissues, or say consoling words. This is just how an agent works: we observe, measure the problem, generate solutions, and decide on the action that best improves the environment.
Then, wouldn't it be amazing if an agent knew you were mad as hell because you typed in capital letters and perhaps slipped in some insults? Or if it knew you were sad from your short replies and repetitive word choice? For the people at Kata.ai, the hardest thing to train is sarcasm. Understanding sarcasm is contextual, so the agent must learn the mood of the whole conversation. Once the agent has a grasp of what's going on, it must decide which course of action would bring said betterment to the engaged human.
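The text cues above (caps-lock anger, short repetitive sadness) can be sketched as a toy rule-based classifier. This is purely an illustration of the idea, not Kata.ai's actual approach; the thresholds and the insult list are made up, and real systems would use trained models over whole conversations:

```python
import re

# Hypothetical insult list, for illustration only.
INSULTS = {"idiot", "stupid", "useless"}

def guess_emotion(messages):
    """Guess a coarse emotional state from a list of chat messages.

    Toy heuristic mirroring the cues in the text: capital letters or
    insults suggest anger; consistently short, repetitive replies
    suggest sadness. Everything else is treated as neutral.
    """
    last = messages[-1]
    words = re.findall(r"[a-z']+", last.lower())

    # Anger cue: mostly uppercase letters, or a known insult.
    letters = [c for c in last if c.isalpha()]
    caps_ratio = sum(c.isupper() for c in letters) / max(len(letters), 1)
    if caps_ratio > 0.7 or INSULTS & set(words):
        return "angry"

    # Sadness cue: the last few replies are short and repeat words.
    short = all(len(m.split()) <= 3 for m in messages[-3:])
    repeated = len(words) > len(set(words))
    if short and repeated:
        return "sad"

    return "neutral"

print(guess_emotion(["WHY IS THIS STILL BROKEN"]))               # angry
print(guess_emotion(["fine", "ok ok", "yeah yeah"]))             # sad
print(guess_emotion(["Can you check my order status, please?"])) # neutral
```

Of course, a heuristic like this has no notion of context, which is exactly why sarcasm is so hard: the same words can flip meaning depending on the conversation's mood.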
On the human side, for example, we would unconsciously idealize the chatbot Veronika as a 'human' Veronika with the female human qualities we expect from a customer service professional: empathetic and helpful. Through its actuators, the agent must reflect those qualities. Since it's a chatbot, this might be conveyed in word choice, the use of icebreakers, or timing. As noted, at least in this case, chatbots are used to eliminate the boring parts of a customer service job. The Kata.ai co-founder emphasized that AI will not eradicate human labor, since human connection remains essential. Personally, I've never contacted Veronika, but I guess that when she runs out of solutions to offer from the FAQ list, she would (finally) connect the customer with a human customer service staff member.
In my opinion, and speaking from the experience of chatting with a few chatbots, the hardest impression to shake is knowing that we're talking to a chatbot. As Picard said, our emotions affect our thinking and decision making: the conversation may stop being interesting once we know we're talking to a program. Hence our uninterested responses. The challenge for chatbot designers is to make their creations as natural and relatable as possible. Establishing a mental-emotional connection before engagement would enhance the experience on the human side. Hopefully, by reducing judgement and negativity beforehand, the interaction would flow more naturally, and the agent could gain deeper insights from the human encounter for its own learning needs.
What I want to emphasize here is that designing the human side of AI is just as important. Starting with the idea of personas may soften people's view of AI agents as perfect, robotic, black-and-white thinkers into something friendlier and (seemingly) capable of understanding and mimicking emotions; something that focuses on both semantics and syntax.
If a computer has the capability of being effectively affective, it would enhance the quality and smooth the edges of human-computer interactions. Who knows, maybe it would also lead to more meaningful interactions, just like the Kata.ai co-founder said.
Picard, R. W. (1995). Affective computing.
Nice article, Laras. Keep up the good work!
See you in the next post.