Fit AI With ‘Ethical Black Box’: Researchers Emphasize The Need For Safety

As much as we like to think of it as science fiction, we are already living in the future: robots walk among us, helping with our everyday activities. As AI makes its way into ever more sectors and industries, the need to equip these systems to make better, safer decisions is growing too. Following the recent report of a Knightscope K5 security robot driving into a fountain in Washington, D.C., researchers have begun debating whether such robots should be fitted with an 'ethical black box': a recorder, analogous to an aircraft's flight data recorder, that logs what a robot senses and why it acts, so that accidents can be investigated and future decisions improved.

At a conference held at the University of Surrey, researchers, among them Alan Winfield and Marina Jirotka, who proposed the ethical black box, came forward to discuss this need and to pose pertinent questions of safety, not just for the machines themselves but for the people around them. Because these robots are not directly steered by the humans who build them, they already operate with a degree of autonomy. But ensuring that they do not harm or inconvenience the people around them requires not only smarter systems, but also a way to reconstruct what went wrong when they fail.

Self-driving cars are a good example of why such a record is needed. After Tesla began shipping cars with its Autopilot driver-assistance system, accidents were reported in which the software misread its surroundings and failed to adapt to variable conditions. This is where researchers think an 'ethical black box' would come in handy: it would preserve exactly what the car's sensors detected and what the software decided, giving investigators something concrete to work from. And as these increasingly autonomous systems take on more consequential tasks, the risks posed by opaque decision-making grow as well, which makes an auditable record of their behaviour all the more important.
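
To make the idea concrete, the sketch below shows one way such a recorder could look in practice: a simple append-only log that captures each sensor snapshot, the decision taken, and the reason for it. This is a minimal illustration only; the class and field names here are hypothetical and not drawn from the researchers' actual design.

```python
import json
import time
from dataclasses import dataclass, asdict

# A minimal sketch of an "ethical black box": an append-only recorder,
# analogous to an aircraft's flight data recorder, that logs what a robot
# sensed and what it decided so an incident can be reconstructed later.
# All names here (BlackBoxRecorder, DecisionRecord, etc.) are illustrative,
# not part of any published specification.

@dataclass
class DecisionRecord:
    timestamp: float        # wall-clock time of the decision
    sensor_snapshot: dict   # raw inputs the controller acted on
    decision: str           # the action the controller chose
    rationale: str          # machine-readable reason for the choice

class BlackBoxRecorder:
    """Writes one JSON line per decision to an append-only log."""

    def __init__(self, log_path: str = "blackbox.jsonl"):
        self.log_path = log_path

    def record(self, sensor_snapshot: dict, decision: str, rationale: str) -> None:
        entry = DecisionRecord(time.time(), sensor_snapshot, decision, rationale)
        # Open in append mode so earlier entries are never overwritten.
        with open(self.log_path, "a") as log:
            log.write(json.dumps(asdict(entry)) + "\n")

# Hypothetical usage: a patrol robot reacting to an obstacle ahead.
if __name__ == "__main__":
    recorder = BlackBoxRecorder()
    readings = {"lidar_min_distance_m": 0.4, "water_detected": True}
    recorder.record(
        sensor_snapshot=readings,
        decision="halt",
        rationale="obstacle closer than 0.5 m safety threshold",
    )
```

Replaying such a log after an accident would show investigators, entry by entry, what the robot perceived and what it chose to do, much as a flight data recorder does for an aircraft.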
