Artificial Intelligence is a true science, not science fiction. Our culture of science fiction paints AI as robots or overlording computers, and that imagery gives the actual science a chance to ask which ethical boundaries should shape AI research and development standards, so that the outcome does not mirror the negative futures predicted. To review the core of AI: it is the demonstrated capability of man-made systems to learn techniques and skills, and to react to, compose for, or form suppositions about different scenarios. It can and should become the technology that lets humanity venture beyond our near space, supplying the analytical, navigational, reactive, and vital survival data on which our collective success will depend. It has already been adapted to precision manufacturing, and, like most discoveries of the late twentieth century, it has been studied and employed within the military and security spheres.
As an example, space exploration mechanisms will draw on the data from all previous missions. There will be a base of all the known properties of our world (and the near space we have reached): our understanding of atmospherics, gravity, and the known laws of physics. As new discoveries occur, they will be synthesized into that base, and the AI will use it to assist in forming hypotheses toward a goal.
We’ve tested very focused forms of AI. IBM’s Deep Blue learned to play chess at grandmaster level, losing its first match to world champion Garry Kasparov in 1996 before winning the 1997 rematch. The knowledge base behind such a system, once created on a shared set of servers representing the average mid-size company’s data center, can now be reproduced in a much smaller and more compact form. IBM later developed Watson for the game-specific context of the popular televised quiz show “JEOPARDY.” In summary, the requirements for that AI included natural language with all its technical nuances, a vast base of knowledge and colloquial expression, and the interpretation of clues so that responses could be phrased as questions. Watson won under the variable conditions of the televised games, yet what is apparent from the outcome is that Watson didn’t simply walk over its human opponents and fully control the direction of the game.
AI doesn’t acquire knowledge in the same manner as we do, but given a premise of base rules, it should be able to examine and grow both its data-collection parameters and its validation methods. Just as a child learns what “hot” means and learns not to touch something hot unless he or she wants to get hurt, AI needs to learn which anomalies exist in the world of its consumed data and whether those anomalies amount to something worth examining.
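The idea of spotting “something worth examining” in consumed data can be sketched very simply. The function below (a minimal illustration, not any particular production system; the sensor data and the two-standard-deviation threshold are invented for the example) flags readings that deviate sharply from what the system has seen so far:

```python
# Minimal sketch of anomaly detection: flag readings that deviate
# strongly from the rest of the data. Values and threshold are
# illustrative assumptions, not from any real system.
from statistics import mean, stdev

def find_anomalies(readings, threshold=2.0):
    """Return readings more than `threshold` standard deviations from the mean."""
    mu = mean(readings)
    sigma = stdev(readings)
    if sigma == 0:
        return []
    return [x for x in readings if abs(x - mu) / sigma > threshold]

# One "hot" outlier among otherwise steady temperature readings.
sensor_data = [20.1, 19.8, 20.3, 20.0, 95.0, 20.2]
print(find_anomalies(sensor_data))  # the 95.0 reading stands out
```

Deciding whether a flagged reading is a faulty sensor or a genuine event is the harder, second step the paragraph above describes: the system must learn from the anomaly’s source, not merely detect it.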
There’s been a claim that an AI subsystem, based on analysis of social media content from multiple sources, predicted the outcome of recent Presidential elections and accurately picked the convention nominees for both parties. The claim is that the system is able to extract substance from repetition and discard rants. It would be interesting to test its accuracy on similar democratic selection processes elsewhere; the UK and Australia, both Commonwealth nations, hold elections with two dominant parties that supply the governing leadership. Contrary to popular belief, not everyone in America has the means or time to vocalize every opinion they may have, and that is a big-data fault. Any system that claims to predict is fed volumes of information and, through rapid analysis, learns to model the factors that contribute to an outcome. Because the historical information it learned from may be finite, or incompatible with current data gathered under modern measurement, there will be a margin of error. It all reflects a very old adage of computing: Garbage In, Garbage Out.
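The “extract substance from repetition, discard rants” claim can be made concrete with a toy sketch. Everything here is a hypothetical heuristic of my own for illustration (the rant test, the sample posts, and the candidate names are all invented); a real system would use far richer language models:

```python
# Toy sketch of "discard rants, count repetition": drop posts that look
# like rants, then tally candidate mentions in what remains. The rant
# heuristic and sample data are assumptions for illustration only.
from collections import Counter

def looks_like_rant(post):
    # Crude assumed heuristic: mostly upper-case letters or heavy exclamation.
    letters = [c for c in post if c.isalpha()]
    upper_ratio = sum(c.isupper() for c in letters) / max(len(letters), 1)
    return upper_ratio > 0.6 or post.count("!") >= 3

def tally_mentions(posts, candidates):
    kept = [p.lower() for p in posts if not looks_like_rant(p)]
    return Counter(name for p in kept for name in candidates if name in p)

posts = [
    "I think smith has the better platform",
    "SMITH IS A DISASTER!!! WAKE UP!!!",          # discarded as a rant
    "jones made a fair point, but smith's plan is costed",
    "leaning toward smith after the debate",
]
print(tally_mentions(posts, ["smith", "jones"]))
```

The GIGO point follows directly: if the kept posts over-represent people with the time and means to post, the tally models the loudest voices, not the electorate.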
The GIGO factor and the hack factor are what concern most of the tech forerunners as they consider AI.
AI scares us because it can process data faster than we can. In truth, within the parameters of a given task the result is probably a tie; we are simply restricted by our biological form in how quickly we can articulate or act on it. We have begun to develop vehicles that can mitigate traffic situations, but to what level are they being judged for performance under learned conditions? Even with the latest mapping and infrastructure information, could an autonomous vehicle properly face the decision between adjusting its speed to stretch its remaining fuel and taking a twenty-mile detour for refueling before returning to its pre-ordained course? A human driver knows what the low-fuel warning light means and reacts knowing that moving at 35 mph on a highway zoned for 60 mph is dangerous. That, however, is a practical example of how the AI core of autonomous vehicles would be developed. The disadvantage of AI-powered autonomous vehicles is that ideally they only work when ALL the vehicles sharing their environment are autonomous. Likewise, the assembly-line robot that spot-welds 2,400 units over a 24-hour cycle still reports its production data to a central location; when its production dips or rises, that is a sign for other systems (human or AI) to work out why. The same process, pared down to output based purely on human activity, has a failure ratio too; it will be important for AI to accept a near-zero tolerance for error in any of its outcomes.
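The refuel-versus-slow-down dilemma above can be sketched as a simple decision rule. All figures here (the range estimates per speed and the twenty-mile detour) are illustrative assumptions, not data from any real vehicle, and a real planner would also weigh the hazard of crawling on a fast highway:

```python
# Hypothetical sketch of the fuel decision described above.
# range_by_speed maps a speed (mph) to estimated remaining range (miles);
# all numbers are invented for illustration.
def fuel_decision(range_by_speed, distance_to_goal, detour_miles=20):
    cruise = max(range_by_speed)    # faster option, e.g. 60 mph
    economy = min(range_by_speed)   # fuel-stretching option, e.g. 35 mph
    if range_by_speed[cruise] >= distance_to_goal:
        return f"continue at {cruise} mph"
    if range_by_speed[economy] >= distance_to_goal:
        # Reachable, but crawling at economy speed on a highway zoned
        # much faster is itself a hazard the planner must weigh.
        return f"slow to {economy} mph"
    return f"take the {detour_miles}-mile refueling detour"

# 40 miles of range at 60 mph, 55 at 35 mph, goal 50 miles away.
print(fuel_decision({60: 40, 35: 55}, distance_to_goal=50))
```

Even this toy version shows why the learned conditions matter: the rule is only as good as the range estimates fed into it, which is the GIGO factor again.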
Just as people learn, so does AI within the environment that contains it. Experience improves our relationship with the world (the learning curve), yet experience also restrains us through our memory of failure and harm. AI must experience errors and trace their source before it can self-correct and take the next step toward becoming a true sentient entity of tomorrow’s science fact.