What we will learn
- Ethics
Code of Ethics
### What Is a Code of Ethics?
A code of ethics is a set of guiding principles designed to help professionals conduct business honestly and with integrity. A code of ethics document may outline the mission and values of the business or organization, how professionals are supposed to approach problems, the ethical principles based on the organization’s core values, and the standards to which the professional is held.
### General Ethical Principles
Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing.
- This principle, which concerns the quality of life of all people, affirms an obligation of computing professionals, both individually and collectively.
- An essential aim of computing professionals is to minimize negative consequences of computing.
- Computing professionals should consider whether the results of their efforts will respect diversity and will be used in socially responsible ways.
- In addition to a safe social environment, human well-being requires a safe natural environment.
Avoid harm.
- In this principle, “harm” means negative consequences, especially when those consequences are significant and unjust.
- Harms include unjustified physical or mental injury and unjustified destruction or disclosure of information.
- Well-intended actions, including those that accomplish assigned duties, may lead to harm.
- When that harm is unintended, those responsible are obliged to undo or mitigate the harm as much as possible.
- Avoiding harm begins with careful consideration of potential impacts on all those affected by decisions.
### Ethics in the Workplace
Google and AI
Google Backtracks, Says Its AI Will Not Be Used for Weapons or Surveillance
- Google is committing to not using artificial intelligence for weapons or surveillance after employees protested the company’s involvement in Project Maven, a Pentagon pilot program that uses artificial intelligence to analyze drone footage.
- Google says it will continue to work with the United States military on cybersecurity, search and rescue, and other non-offensive projects.
- Google CEO Sundar Pichai announced the change in a set of AI principles released today.
- The principles are intended to govern Google’s use of artificial intelligence and are a response to employee pressure on the company to create guidelines for its use of AI.
### Ethics in Technology
Ethics of technology is a sub-field of ethics addressing the ethical questions specific to the Technology Age, the transitional shift in society where personal computers and subsequent devices have been introduced to provide users an easy and quick way to transfer information. Ethics in technology has become an evolving topic over the years as technology has developed.
### Google AI Principles
We will assess AI applications in view of the following objectives. We believe that AI should:
1. Be socially beneficial.
- The expanded reach of new technologies increasingly touches society as a whole.
- Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment.
- AI also enhances our ability to understand the meaning of content at scale.
2. Avoid creating or reinforcing unfair bias.
- AI algorithms and datasets can reflect, reinforce, or reduce unfair biases.
- We recognize that distinguishing fair from unfair biases is not always simple and differs across cultures and societies.
- We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
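The idea that datasets can reflect or reinforce unfair bias can be made concrete with a simple audit. Below is a minimal sketch of a demographic-parity check, i.e. comparing positive-outcome rates across groups; the record fields and toy data are hypothetical, not part of any real system mentioned here.

```python
# Minimal sketch: auditing a labeled dataset for demographic parity,
# i.e. whether positive outcomes occur at similar rates across groups.
# The field names ("group", "outcome") and the toy data are hypothetical.

def positive_rate(records, group):
    """Fraction of records in `group` with a positive outcome (outcome == 1)."""
    members = [r for r in records if r["group"] == group]
    if not members:
        return 0.0
    return sum(r["outcome"] for r in members) / len(members)

def parity_gap(records, group_a, group_b):
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(records, group_a) - positive_rate(records, group_b))

# Toy data: loan decisions (outcome=1 means approved) for two groups.
data = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0}, {"group": "A", "outcome": 1},
    {"group": "B", "outcome": 1}, {"group": "B", "outcome": 0},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
]

gap = parity_gap(data, "A", "B")
print(f"parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> gap 0.50
```

A large gap does not by itself prove unfairness (the principle notes that distinguishing fair from unfair bias differs across contexts), but it flags where human judgment is needed.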
3. Be built and tested for safety.
- We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.
- We will design our AI systems to be appropriately cautious and seek to develop them in accordance with best practices in AI safety research.
- In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.
4. Be accountable to people.
- We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal.
- Our AI technologies will be subject to appropriate human direction and control.
5. Incorporate privacy design principles.
- We will incorporate our privacy principles in the development and use of our AI technologies.
- We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.
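One way an "architecture with privacy safeguards" can look in practice is data minimization plus pseudonymization before data is used. The sketch below is a hypothetical illustration, not Google's implementation; the field names and salt are assumptions.

```python
# Minimal sketch of two privacy safeguards: data minimization (keep only
# fields the task needs) and pseudonymization (replace raw identifiers
# with a salted one-way hash). Field names and the salt are hypothetical.

import hashlib

ALLOWED_FIELDS = {"age_range", "country"}   # data minimization allow-list
SALT = "example-salt"                        # in practice, a managed secret

def pseudonymize(user_id):
    """Replace a raw identifier with a truncated salted SHA-256 hash."""
    return hashlib.sha256((SALT + user_id).encode()).hexdigest()[:12]

def minimize(record):
    """Keep only allowed fields and pseudonymize the identifier."""
    cleaned = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    cleaned["user"] = pseudonymize(record["user_id"])
    return cleaned

record = {"user_id": "alice@example.com", "age_range": "25-34",
          "country": "KR", "precise_location": "37.56,126.97"}
print(minimize(record))  # precise_location dropped, user_id hashed
```

Transparency and control (notice, consent, deletion) would sit on top of this; the safeguard here only limits what data enters the pipeline in the first place.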