AI-driven machines can be fooled, warn IISc researchers
Machine-learning and artificial-intelligence algorithms used in sophisticated applications such as autonomous cars are not foolproof and can be easily manipulated by introducing errors, Indian Institute of Science (IISc) researchers have warned.

Machine-learning and AI software is trained on an initial set of data, such as images of cats, and learns to identify feline images as more such data are fed in. A common example is Google returning better results as more people search for the same information.

AI applications are becoming mainstream in areas such as healthcare, payments processing, drone-based crowd monitoring, and facial recognition in offices and airports.

"If your data input is not clear and vetted, the AI machine could throw up surprising results and that could end up being hazardous."
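The article does not describe the specific manipulation the IISc researchers studied, but the best-known illustration of this failure mode is the adversarial example: a tiny, deliberately chosen perturbation of the input that flips a trained model's prediction even though the change is barely visible to a human. The sketch below shows one standard way to craft such an input, the fast gradient sign method (FGSM), in PyTorch; the toy untrained classifier, the random "image", and the epsilon value are illustrative assumptions, not the researchers' setup.

```python
import torch
import torch.nn as nn

# A tiny stand-in classifier (hypothetical; chosen only to keep the
# sketch self-contained and runnable).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

loss_fn = nn.CrossEntropyLoss()

# A dummy 28x28 "image" and label standing in for real data.
image = torch.rand(1, 1, 28, 28, requires_grad=True)
label = torch.tensor([3])

# Forward pass, then gradient of the loss w.r.t. the input pixels.
loss = loss_fn(model(image), label)
loss.backward()

# FGSM: nudge every pixel slightly in the direction that increases the
# loss. epsilon controls how visible the perturbation is.
epsilon = 0.1
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

with torch.no_grad():
    print("original prediction:   ", model(image).argmax(dim=1).item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

With a real trained image classifier, a perturbation this small is typically imperceptible to people yet reliably changes the model's output, which is why unvetted inputs can be hazardous in settings such as autonomous driving or facial recognition.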