-
Up until now, our communication with machines has always been limited to conscious and direct forms. Whether it's something simple like turning on the lights with a switch, or as complex as programming robotics, we have always had to give a machine a command, or even a series of commands, in order for it to do something for us. Communication between people, on the other hand, is far more complex and a lot more interesting, because we take into account so much more than what is explicitly expressed. We observe facial expressions and body language, and we can intuit feelings and emotions from our dialogue with one another. This actually forms a large part of our decision-making process. Our vision is to introduce this whole new realm of human interaction into human-computer interaction, so that computers can understand not only what you direct them to do, but can also respond to your facial expressions and emotional experiences. And what better way to do this than by interpreting the signals naturally produced by our brain, our center for control and experience.
Well, it sounds like a pretty good idea, but this task, as Bruno mentioned, isn't an easy one for two main reasons: First, the detection algorithms. Our brain is made up of billions of active neurons, around 170,000 km of combined axon length. When these neurons interact, the chemical reaction emits an electrical impulse which can be measured. The majority of our functional brain is distributed over the outer surface layer of the brain. And to increase the area that's available for mental capacity, the brain surface is highly folded. Now this cortical folding presents a significant challenge for interpreting surface electrical impulses. Each individual's cortex is folded differently, very much like a fingerprint. So even though a signal may come from the same functional part of the brain, by the time the structure has been folded, its physical location is very different between individuals, even identical twins. There is no longer any consistency in the surface signals.
Our breakthrough was to create an algorithm that unfolds the cortex, so that we can map the signals closer to their source, thereby making the system capable of working across a mass population. The second challenge is the actual device for observing brainwaves. EEG measurements typically involve a hairnet with an array of sensors, like the one that you can see here in the photo. A technician puts the electrodes onto the scalp using a conductive gel or paste, usually after preparing the scalp by light abrasion. This is quite time-consuming and isn't the most comfortable process. And on top of that, these systems actually cost in the tens of thousands of dollars.
So with that, I'd like to invite onstage Evan Grant, who is one of last year's speakers, who's kindly agreed to help me to demonstrate what we've been able to develop.
-
So the device that you see is a 14-channel, high-fidelity EEG acquisition system. It doesn't require any scalp preparation, no conductive gel or paste. It only takes a few minutes to put on and for the signals to settle. It's also wireless, so it gives you the freedom to move around. And compared to the tens of thousands of dollars for a traditional EEG system, this headset only costs a few hundred dollars. Now on to the detection algorithms. The facial expression and emotional experience detections I mentioned before are actually designed to work out of the box, with some sensitivity adjustments available for personalization. But with the limited time we have available, I'd like to show you the cognitive suite, which is the ability for you to basically move virtual objects with your mind.
Now, Evan is new to this system, so what we have to do first is create a new profile for him. He's obviously not Joanne -- so we'll "add user." Evan. Okay. So the first thing we need to do with the cognitive suite is to start with training a neutral signal. With neutral, there's nothing in particular that Evan needs to do. He just hangs out; he's relaxed. The idea is to establish a baseline or normal state for his brain, because every brain is different. It takes eight seconds to do this. And now that that's done, we can choose a movement-based action. So Evan, choose something that you can visualize clearly in your mind.
Evan Grant: Let's do "pull."
Tan Le: Okay. So let's choose "pull." So the idea here now is that Evan needs to imagine the object coming forward into the screen. And there's a progress bar that will scroll across the screen while he's doing that. The first time, nothing will happen, because the system has no idea how he thinks about "pull." But maintain that thought for the entire duration of the eight seconds. So: one, two, three, go. Okay. So once we accept this, the cube is live. So let's see if Evan can actually try and imagine pulling. Ah, good job!
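The training flow just demonstrated -- an eight-second neutral recording to establish a baseline, then an eight-second recording while the user holds the "pull" thought, after which the detection goes live -- can be sketched as a nearest-centroid classifier over crude per-channel power features. This is a minimal illustrative reconstruction, not the actual Emotiv algorithm; the feature choice, window shape, and class labels are all assumptions.

```python
import numpy as np

class CognitiveDetector:
    """Toy nearest-centroid classifier for trained mental actions.

    Illustrative sketch only: an 8 s "neutral" window and an 8 s labeled
    action window each become a feature centroid; live windows are then
    assigned to the nearest centroid. Not the real Emotiv algorithm.
    """

    def __init__(self, n_channels=14):
        self.n_channels = n_channels
        self.centroids = {}  # label -> feature centroid

    def _features(self, window):
        # window: (n_channels, n_samples) array of EEG samples.
        # Mean power per channel as a deliberately crude feature vector.
        return np.mean(np.square(window), axis=1)

    def train(self, label, window):
        # e.g. train("neutral", baseline) then train("pull", recording).
        self.centroids[label] = self._features(window)

    def classify(self, window):
        # Return the label whose centroid is nearest in feature space.
        feats = self._features(window)
        return min(self.centroids,
                   key=lambda lbl: np.linalg.norm(feats - self.centroids[lbl]))
```

With synthetic signals of clearly different amplitude standing in for real EEG, a live "pull" window lands nearer the "pull" centroid than the neutral one; the single labeled recording is all the training data the sketch needs, mirroring the one-shot flow in the demo.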
-
That's pretty amazing.
(Applause)
So we have a little bit of time available, so I'm going to ask Evan to do a really difficult task. And this one is difficult because it's all about being able to visualize something that doesn't exist in our physical world. This one is "disappear." With movement-based actions, we perform them all the time, so we can visualize them; but for "disappear," there's really no analogy. So Evan, what you want to do here is to imagine the cube slowly fading out, okay. Same sort of drill. So: one, two, three, go. Okay. Let's try that. Oh, my goodness. He's just too good. Let's try that again.
EG: Losing concentration.
(Laughter)
TL: But we can see that it actually works, even though you can only hold it for a little bit of time. As I said, it's a very difficult process to imagine this. And the great thing about it is that we've only given the software one instance of how he thinks about "disappear." As there is a machine learning algorithm in this --
(Applause)
Thank you. Good job. Good job.
(Applause)
Thank you, Evan, you're a wonderful, wonderful example of the technology.
-
So as you saw before, there is a leveling system built into this software, so that as Evan, or any user, becomes more familiar with the system, they can continue to add more and more detections, and the system begins to differentiate between distinct thoughts. And once you've trained up the detections, these thoughts can be assigned or mapped to any computing platform, application or device.
So I'd like to show you a few examples, because there are many possible applications for this new interface. In games and virtual worlds, for example, your facial expressions can naturally and intuitively be used to control an avatar or virtual character. Obviously, you can experience the fantasy of magic and control the world with your mind. And also, colors, lighting, sound and effects, can dynamically respond to your emotional state to heighten the experience that you're having, in real time. And moving on to some applications developed by developers and researchers around the world, with robots and simple machines, for example -- in this case, flying a toy helicopter simply by thinking lift with your mind.
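Driving an on/off command like the helicopter's lift from a continuous detection strength usually needs some smoothing, so the rotor doesn't stutter whenever the signal hovers near a threshold. A minimal sketch of one common approach, hysteresis thresholding; the 0-1 strength input, the threshold values, and the controller interface are all hypothetical, not taken from the headset's SDK:

```python
class LiftController:
    """Turn a continuous "lift" detection strength (0.0-1.0) into throttle.

    Hysteresis: a stronger signal is required to start lifting than to
    keep lifting, so the output doesn't flicker near the threshold.
    Thresholds and interface are illustrative assumptions.
    """

    def __init__(self, on_threshold=0.6, off_threshold=0.4):
        self.on_threshold = on_threshold
        self.off_threshold = off_threshold
        self.lifting = False

    def update(self, strength):
        """Return the throttle (0.0-1.0) for one detection sample."""
        if self.lifting:
            self.lifting = strength >= self.off_threshold
        else:
            self.lifting = strength >= self.on_threshold
        return strength if self.lifting else 0.0
```

Once lifting has started, a momentary dip to 0.5 keeps the helicopter in the air; only dropping below the lower threshold cuts the throttle.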
The technology can also be applied to real-world applications -- in this example, a smart home. From the user interface of the control system, you can open or close the curtains, and of course turn the lighting on or off. And finally, to real life-changing applications, such as being able to control an electric wheelchair. In this example, facial expressions are mapped to the movement commands.
Man: Now blink right to go right. Now blink left to turn back left. Now smile to go straight.
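The mapping narrated above (blink right to go right, blink left to go left, smile to go straight) amounts to a small dispatch table from expression detections to movement commands. A sketch, where the event and command names are assumptions for illustration:

```python
# Each facial-expression detection event from the headset is looked up in
# a table of wheelchair movement commands. Names are illustrative only.
EXPRESSION_TO_COMMAND = {
    "blink_right": "turn_right",
    "blink_left": "turn_left",
    "smile": "go_straight",
}

def command_for(event, default="stop"):
    """Translate one detection event into a movement command.

    Unmapped expressions fall back to a safe default ("stop"), which
    matters for a safety-critical device like a wheelchair.
    """
    return EXPRESSION_TO_COMMAND.get(event, default)
```

Because the detections themselves are trained per user, remapping the same commands to different expressions is just a table edit.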
TL: We really -- Thank you.
(Applause)
We are really only scratching the surface of what is possible today. And with the community's input, and also with the involvement of developers and researchers from around the world, we hope you can help us to shape where the technology goes from here. Thank you so much.