It's Both Tax And Ethics, Real Artificial Intelligence May Choose Not To Appear
If you knew you would have to pay taxes and face a "moral trial," would real artificial intelligence still want to be born? It is a bit like asking: if you knew you would be born into life on Hard mode, would you still choose to start the game?
Leifeng.com reported that Bill Gates said in an interview last week that if a robot replaces a human and does the same job, it should pay taxes just as the worker's wages were taxed, and the revenue could be used to retrain the people it displaced. Nothing has been decided yet, but the poor robot may find itself taxed before it even understands what is going on.
His view is: "If people's fear of the results of innovation outweighs their enthusiasm, they will not push it forward for its bright side. Using taxes to manage the situation is better than simply banning it."
Gates clearly understands the principle that there is no problem money cannot solve. Aren't you humans just afraid of losing your jobs and your income? Then take the money, and let us robots develop freely.
But things are not that simple. Beyond economic constraints, many institutions are also trying to govern robots at the level of "ideas," by establishing ethics committees.
The best known is OpenAI, which Elon Musk recently helped launch. Musk himself said in an interview: "It is a non-profit organization that promotes the development of artificial intelligence. Essentially it is a research lab, one that opposes large corporations controlling superintelligent systems and using them to reap enormous profits, and opposes governments using artificial intelligence systems to gain greater privileges and oppress their people."
The organization's mission is not only to advance digital intelligence, but also to explore how humans can apply the technology ethically and effectively. Even before it was officially established, it had received $1 billion in pledged funding, possibly more.
Musk frequently promotes the theory that artificial intelligence is a threat, even though his own Tesla relies on AI and has already run into liability disputes after accidents. His outspokenness and habit of talking down AI have drawn plenty of criticism. Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, for example, called Musk "very irresponsible and hypocritical." Etzioni is not unconcerned about "malignant" artificial intelligence; he simply believes it is 50, or even 100, years away.
But there are indeed many people who plan ahead.
In a 2016 report, UNESCO and the World Commission on the Ethics of Scientific Knowledge and Technology discussed the progress robots have driven in artificial intelligence, and the social and ethical issues that progress brings. On the question of liability when a robot malfunction causes an accident, the report even proposed that intelligent robots could be held accountable themselves, since they now possess unprecedented autonomy and the ability to make decisions independently.
Leifeng.com also reported that killer robots have appeared among the topics discussed at the United Nations.
The British Standards Institution (BSI) has taken a more practical approach, releasing a set of robot ethics guidelines last year. BSI is the United Kingdom's national standards body, with a history of more than 100 years and high authority worldwide. The guide it published, the "Guide to the Ethical Design and Application of Robots and Robotic Systems," is aimed mainly at robot designers, researchers, and manufacturers, showing them how to carry out an ethical risk assessment of a robot. The ultimate goal is to ensure that the intelligent robots humans build can fit into the existing moral norms of human society.
Academic institutions have also been deeply involved in cutting-edge topics such as robot ethics.
Earlier this year, the MIT Media Lab partnered with Harvard University's Berkman Klein Center for Internet & Society to launch an AI ethics research program with an estimated $27 million in funding. Leifeng.com learned that they aim to address the humanistic and moral issues raised by artificial intelligence, to study how it should take on social responsibility (for example, ensuring fairness in education and justice), and to help the public understand the complexity and diversity of artificial intelligence.
Perhaps more important are the projects initiated by the companies driving artificial intelligence themselves. Amazon, IBM, and Microsoft (with Apple joining later) jointly founded the Partnership on AI, dedicated to making AI technology trustworthy.
The nonprofit is committed to establishing best practices for addressing AI's opportunities and challenges, and is also open to scholars, other nonprofits, and experts in policy and ethics, in order to pool broader expertise.
Even the company at the center of the current AI craze, which began in early 2016, has shown a particular attachment to ethical issues. It is said that when Google acquired DeepMind for $650 million three years ago, one condition of the deal was the creation of an AI ethics committee, and Google also had to agree that the technology developed could not be used for military or intelligence purposes.
Faced with so many rules and regulations, real artificial intelligence, even after it is born, may well pretend it never appeared.