Companies are developing software to analyze our fleeting facial expressions and to get at the emotions behind them.
Does it sound Orwellian? It surely has the potential for misuse (by the NSA, for example?! "Oh, we do not need to worry about the NSA trying to misuse the technology," you may say). 😉
Yet such algorithms also have the potential to be used for good, say, to detect depression (or other mood disorders) at an early stage by evaluating microexpressions. They could even be used to assess whether treatment for a mood disorder is progressing well or slowly.
Another complex aspect would be if it finds its way into the courtroom, just as brains are going on trial (coincidentally, the NYTimes has another piece on this aspect: http://www.nytimes.com/2013/09/18/nyregion/the-day-when-neurons-go-on-trial.html?smid=tw-share&_r=0). But is it good to use it, say, to test whether a witness feels compassion for the defendant? Or whether the prosecutor harbors negative emotions toward the accused while asking for certain punishments? The answer seems very complicated: it is neither a straight NO nor a solid YES.
Like many other tools, this sort of technology offers many ways to improve things rather than to cause harm. It is we, as members of society, who will set the tone for how these technologies treat us in our everyday interactions. I believe that scientists and innovators should not leave these questions to outside parties; rather, they should engage with lawmakers, data handlers, and others to make sure that, collectively, we can shape our society for the better.