Resolution - Ethics and responsibility: AI use in conflict zones

On November 1, 2019, syndicom members from the ICT sector adopted a resolution at a national conference on artificial intelligence (AI) and ethics, in which they also demanded a ban on AI-based weapons systems. The relevance of this demand is particularly evident in the war in Gaza and Israel.

Resolution of the ICT sector of syndicom

The revelations about Project Nimbus, under which the state of Israel concluded contracts with Amazon and Google for cloud services, have already raised questions about the use of AI technologies. As media investigations show, Google also appears to have received orders directly from the Israeli Ministry of Defense to provide cloud computing services, and is reported to have been negotiating a deepening of the partnership since the beginning of the war.

Particularly in light of the Israeli army's use of the AI systems “Lavender” and “Gospel”, such contracts should be questioned all the more urgently. These targeting systems were apparently deployed in a way that accepted large numbers of civilian casualties. Although “Lavender” and “Gospel” still require a human decision, that human review appears to have been little more than a formality in the early phase of the Gaza war.

At this point, syndicom would like to remind everyone that fundamental and human rights as well as constitutional and democratic principles must be protected, respected and accompanied by remedial mechanisms when using AI. Completely autonomous AI systems are ethically untenable. Humans are needed in the loop; humans must remain responsible. This must be ensured by the providers of such AI systems. syndicom shares the concerns of Google employees who speak out against the misuse of the technologies they develop for purposes of oppression or war.

As the trade union syndicom, we stand up for the rights of workers and demand fair and ethical use of technology. The use of AI for surveillance purposes or to support military operations violates our principles of solidarity and respect for human rights.

We therefore call on companies like Google to take their responsibility across the entire supply and value chain to ensure that their technology is not used for purposes that violate human rights or exacerbate conflict. It is essential that technology companies like Google are transparent and publicly commit to ethical principles that ensure that their products and services cannot be misused to support oppression or violence.

Employees at such companies should also be informed about what the code they are developing will be used for. Employees who stand up for these principles must not suffer any professional disadvantages as a result. We fundamentally condemn repression by employers against employees because of their political activism and commitment or their personal stance.

Finally, AI systems should also be regulated at the political level, globally if possible.
