Autonomous killing machines won't look like the Terminator...and that is why they are so scary
30 Jul 2015

Just a few days ago many of the most incredible minds in science and technology urged governments to avoid using artificial intelligence to create autonomous killing machines. One thing that always happens when a warning like this goes out is that you see the inevitable Terminator picture.
The reality is that robots that walk and talk are getting better but still have a ways to go.
Does this mean that I think all those really smart people are silly for making this plea about AI now? No, I think they are probably just in time.
The reason is that the first autonomous killing machines will definitely not look anything like the Terminator. They will more likely than not be drones, which are already in widespread use by the military and will soon be flying over our heads delivering Amazon products.
I also think that when people hear “artificial intelligence” they picture robots that can mimic the behavior of a human being, including the ability to talk, hold a conversation, or pass the Turing test. But it turns out that the “artificial intelligence” you would need to create an automated killing system is much, much simpler than that and is mostly basic data science. The things you would need are:
- A drone with the ability to fly on its own
- The ability to make decisions about which people to target
- The ability to find those people and attack them
The first issue, being able to fly on autopilot, is something that has existed for a while. You have probably flown on a plane that used autopilot for at least some of the flight. I won’t get into the details on this one because I think it is the least interesting - autopilot has been around for a while and it didn’t prompt dire warnings about autonomous agents.
The second issue, deciding which people to target, already exists as well. We have already seen programs like PRISM and others that collect individual-level metadata and presumably use it to make predictions. While the true and false positive rates are probably thrown off by the fact that there are very, very few “true positives,” these programs are being developed, and even relatively simple statistical models can be used to build a predictor - even if those predictors don’t work very well.
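To make that base-rate problem concrete, here is a minimal sketch in Python. Everything in it is made up - synthetic data, invented “metadata” features, an off-the-shelf logistic regression - and it resembles no real program; it just shows why a predictor for something this rare mostly flags the wrong people:

```python
# A minimal sketch (not any real system) of why metadata-based "targeting"
# classifiers are statistically shaky: with very few true positives, even a
# reasonable-looking model produces mostly false alarms.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "metadata" features for 100,000 people, of whom only 0.1% are
# true positives. The features are pure noise plus a small shift for positives.
n, prevalence = 100_000, 0.001
y = rng.random(n) < prevalence
X = rng.normal(size=(n, 5))
X[y] += 0.8  # true positives look only slightly different on these features

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Flag the top-scoring 1% of people and see how many are actually positives.
scores = model.predict_proba(X_test)[:, 1]
flagged = scores > np.quantile(scores, 0.99)
precision = y_test[flagged].mean()
print(f"flagged {flagged.sum()} people, fraction truly positive: {precision:.3f}")
```

Even with features that genuinely separate the two groups a bit, the handful of real positives gets swamped by false alarms - which is exactly the worry when a predictor like this is wired to anything consequential.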
The third issue is being able to find those people and attack them. This is where the real “artificial intelligence” comes into play. But it isn’t artificial intelligence like you might think of it. It could be as simple as having the drone fly around and take people’s pictures, then using those pictures to match up with the people identified through metadata and attack them. Facebook has a [paper](file:///Users/jtleek/Downloads/deepface.pdf) that demonstrates an algorithm that can identify people with near human-level accuracy. This approach is based on something called deep neural nets, which sounds very intimidating but is actually just a set of nested nonlinear logistic regression models. These models have gotten very good because (a) we are getting better at fitting them mathematically and computationally, but mostly (b) we have much more data to train them with than we ever did before. The speed at which this part of the process is developing is (I think) why there is so much recent concern about potentially negative applications like autonomous killing machines.
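To make the “nested nonlinear logistic regression” description concrete, here is a toy sketch using only numpy, with random untrained weights and nothing like DeepFace in scale or architecture - it just shows how a neural network is logistic-regression-style layers composed on top of each other:

```python
# A toy illustration of a deep neural net as "nested nonlinear logistic
# regressions": each layer is a linear model pushed through a sigmoid, and
# layers are composed (nested) to form the network.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def layer(x, W, b):
    # One "logistic regression" step: linear combination, then sigmoid.
    return sigmoid(W @ x + b)

rng = np.random.default_rng(1)
x = rng.normal(size=4)                      # a tiny 4-number "image"

# Randomly initialized weights for a 4 -> 3 -> 1 network (training omitted).
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

hidden = layer(x, W1, b1)                   # first nested regression
output = layer(hidden, W2, b2)              # second, stacked on top
print("predicted probability of a match:", float(output[0]))
```

A real face-recognition network stacks many more such layers and learns the weights from millions of images, but the basic structure is just this composition.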
The scary thing is that these technologies could be combined *right now* to create such a system - one not controlled directly by humans, but one that made automated decisions and flew drones to carry out those decisions. The technology for shrinking these deep neural net systems that identify people is so good that they can already run on a phone for things like language translation, and they could easily be embedded in a drone.
So I am with Musk, Hawking, and others who would urge governments to use caution in developing these systems. Just because we can build such a system doesn’t mean it will do what we want. Just look at how well Facebook/Amazon/Google make suggestions for “other things you might like” to get an idea of how potentially disastrous automated killing systems could be.