What do you mean by ethics for artificial intelligence?

Well, that's a good starting point, because there are different sorts of questions being asked in the books I'm looking at. One kind of question is: what sorts of ethical issues is artificial intelligence, AI, going to bring? AI encompasses so many different applications that it could raise a really wide variety of questions. For instance, what's going to happen to the workforce if AI makes lots of people redundant? That raises ethical issues because it affects people's wellbeing and employment.

There are also questions about whether we somehow need to build ethics into the sorts of decisions that AI devices are making on our behalf, especially as AI becomes more autonomous and more powerful. For example, one question that is debated a lot at the moment is: what sorts of decisions should be programmed into autonomous vehicles? If a vehicle gets into a situation where it's going to have to crash one way or the other, and perhaps kill a pedestrian or kill the driver, what sort of ethics might go into that decision?

But there are also ethical questions about AI in medicine, for example where AI systems interact directly with patients. This might be useful, since it seems people may sometimes open up more freely online. But, obviously, there are going to be ethical issues in how such a system should respond to someone saying that they're going to kill themselves, or something along those lines. There are various ethical issues about how you program that in.

"AI is pushing us to the limits of various questions about what it is to be a human in the world"

I suppose work in AI ethics can be divided according to whether you're talking about the sorts of issues we're facing now or in the very near future, or the more speculative, longer-term questions.