• Home
  • ELA Blog
  • About Me
  • Reading log
  • Symposium 2018
  • Writing portfolio
Our driving question:
Does the convergence of artificial intelligence and weaponry create a threat for humanity?


Personally - A.I. affects us personally because we see A.I. technology all around us, from the many devices in people's homes and classrooms that use A.I. It also affects us because when we are older, we may live in a world where the military uses A.I., which, as our research shows, can help or harm us. Evan Ackerman states, "There will be some sort of arms race that will lead to the rapid advancement and propagation of things like autonomous 'armed quadcopters,' eventually resulting in technology that's accessible to anyone if they want to build a weaponized drone." This would allow random people with unknown intentions to use potentially dangerous and deadly weaponry for unethical purposes, which could affect the world my peers and I will live in.


IMPACT
Artificial intelligence has the potential to threaten our society and impact it in a dangerous way. In the article "We Should Not Ban Killer Robots, Here's Why," the author notes, "you see something sinister on the horizon when there's so much simultaneous potential for positive progress." This shows that progress also comes with drawbacks, and if the two are not balanced, technology will not help us; it may endanger both our advancements in war and our race. This is a threat to humanity because the more weapons there are, the more greed there is to get them. Furthermore, the military can now engage in conflict from an even more discreet location, as expressed in "Military Robots: Armed, but How Dangerous?": "Military technology has advanced to allow actions to be taken remotely, for example using drone aircraft or bomb disposal robots, raising the prospect that those actions could be automated." This shows that, over time, weaponized A.I. can negatively impact how war is fought and how humans are affected. It creates ethical conflicts because the technology that may help us progress can also harm our race, and the act of war is no longer in person but detached from its dangers, which creates a disconnect between people and the wars they fight.

Unanswered questions

Questions we still have are:
  • Is an artificial intelligence arms race inevitable?
  • How will this impact our future?
  • Will the reliance on A.I. affect the way humans think?
  • Will this change the way humans make decisions about their problems?

STUDENT-LED RESEARCH - OUR METHOD

The method we chose was to create a survey, because it let us see results from our community and how people relate to the issue. We asked six questions, and our results show how 45 middle school students, who will grow up and live in a future where A.I. is a bigger part of their lives, thought about the weaponization of A.I. One flaw or shortcoming of our method is that it doesn't reflect how most people feel, because we only received responses from a small group of middle school students who do not represent the rest of the population. However, getting their opinions helps us understand the conflict within our own community of peers.


Analysis: 
According to the data shown above, Negative and Neutral received the same share of responses at 31.1% each, while 20% of people chose that A.I. will impact the human race positively.



Analysis: Out of the 45 responses we received, only 4.4% (2 people) fully supported the use of A.I. in weapons, and only 6.7% (3 people) did not support it at all, while 48.9% (22 people) felt neutral.
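As a quick sanity check on the arithmetic, the percentages for this question can be recomputed from the raw counts. This is just a small sketch; the counts (2, 3, and 22) and the 45-respondent total come from the analysis and method sections above:

```python
# Recompute the survey percentages for this question from the raw
# response counts; the total of 45 respondents comes from our method.
total = 45
counts = {
    "fully support": 2,
    "don't support at all": 3,
    "neutral": 22,
}

for choice, n in counts.items():
    pct = round(n / total * 100, 1)
    print(f"{choice}: {n} of {total} = {pct}%")
# fully support: 2 of 45 = 4.4%
# don't support at all: 3 of 45 = 6.7%
# neutral: 22 of 45 = 48.9%
```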




Analysis: We learned that 6.7% (3 people) fully rely on artificial intelligence, while 22.2% (10 people) don't rely on A.I. at all.




Analysis: The most responses we received were in the neutral section, at 53.3%, which suggests that people don't feel strongly about what the government does. The second most voted choice, at 31.1%, was that A.I. will help our military.



Analysis: The most selected choice was the second option, which stated that A.I. will provide more accuracy in times of war. The second most selected was the fourth option, which stated that A.I. will start wars or make them worse.




Analysis: Lastly, most of the feedback we received was "I don't know," but 35.6% of people thought that our military should do more research into this, and 24.4% of people thought we should not go into this territory.



  • After reviewing our responses, we discovered that Negative and Neutral (31.8% each) received the same vote for how A.I. may impact the human race over time.
  • Our second question was, "How do you feel about the government using technological innovations like artificial intelligence within weapons?" We didn't learn much, because most respondents chose neutral (50%).
  • Our third question was, "How much do you rely on artificial intelligence in your daily life? Artificial intelligence can include Siri, Alexa, Cortana, etc…" We learned that 20.5% don't use most of these, and 6.8% of people rely on A.I.
  • Our fourth question was, "Do you think that artificial intelligence can help our military or harm our military?" The most common response was neutral, at 52.3%, with 31.7% of people thinking that it will help our military.
  • Our fifth question was, "What are your concerns about using artificial intelligence in weaponry during times of war?" The most common response, at 77.3%, was "A.I. will provide more accuracy in times of war."
  • Our sixth and last question was, "Do you believe that our military should invest more into researching and exploring the weaponization of artificial intelligence?" The feedback we received wasn't very conclusive, because 40.9% chose "I don't know," but 34.1% chose that our military should do more research on the weaponization of artificial intelligence.
Conflict is composed of opposing forces. 

Artificial intelligence is composed of opposing forces: one side calls for banning militarized A.I., while the other accepts the use of A.I. and wants to continue researching and developing it. Elon Musk and other A.I. experts call for a ban, as stated in an article by David Z. Morris: "One hundred and sixteen roboticists and A.I. researchers, including SpaceX founder Elon Musk and Google Deepmind co-founder Mustafa Suleyman, have signed a letter to the United Nations calling for strict oversight of autonomous weapons, a.k.a. 'killer robots.'" The letter describes the risks of robotic weaponry in dire terms and warns that failure to act swiftly will lead to an "arms race" toward killer robots. The opposing side wants to develop A.I. As Evan Ackerman states, "There's been continual development of technologies that allow us to engage our enemies while minimizing our own risk, and what with the ballistic and cruise missiles that we've had for the last half century…"


Conflict may be natural or man-made. 

Weaponized artificial intelligence is a man-made conflict because humankind wants power and control. Humans can use A.I. to get power, even at the cost of human lives. A.I. can be used in war to gain the upper hand, because countries do not care how dangerous a weapon is if it helps protect their country and win the war. Research by Will Knight states that "automated weapons could conceivably help reduce unwanted casualties in some situations, since they would be less prone to error, fatigue, or emotion than human combatants." This would allow more night attacks and the ability to hit targets accurately with less training. Since the term artificial intelligence was first coined by John McCarthy in 1956, when he helped organize the first academic conference on the subject, A.I. researchers have been able to advance and create more objects of war. As technology advances, the threat becomes more dangerous and increases the need for power. Evan Ackerman states, "There will be some sort of arms race that will lead to the rapid advancement and propagation of things like autonomous 'armed quadcopters,' eventually resulting in technology that is accessible to anyone if they want to build a weaponized drone," showing that countries might go into a race for power, or a race for an upper hand in war.


Big idea
Conflict may be intentional or unintentional.

The use of A.I. as a weapon in war is intentional. However, the resulting ethical conflicts that surround the weaponization of A.I. may not be intentional. In war, A.I. is used to pilot unmanned vehicles, protecting the lives of allied forces but also endangering the lives of opposing forces. Since A.I. systems are not fully human, they may still make mistakes in judgment, or they can be hacked to endanger their own side. The article "AI (Artificial Intelligence)" states that "A.I. systems have a sense of self, have consciousness," which could mean that they might endanger people in order to accomplish their purpose. Another quote from the same article describes an A.I. system that can currently perform certain actions but cannot use memory: "Deep Blue can identify pieces on the chess board and make predictions, but it has no memory and cannot use past experiences to inform future ones," which means that A.I. still has further to go in its development. In wartime, humans use their memory and their ethical judgment to make decisions, but A.I. may not have that capacity. Over time, A.I. may become something that both helps and hurts us in times of war. At the Association for the Advancement of Artificial Intelligence conference in 2015, Evan Ackerman stated, "banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil," showing that weaponized A.I. is intentional.

Intentional
  • Creating reactive machines that analyze possible moves and choose strategic moves.
  • Integrating self-awareness into A.I.
  • Giving weapons to A.I.
  • Using A.I. technology in unmanned vehicles and planes.


Conflict may allow for synthesis and change. 

A.I. creates a threat to humanity because the technology used to create it opens new doors of possibilities to be helpful or hurtful. In war, humans are the go-to weapon, but at the risk of our own lives, so technology is starting to take that role. In an article by Anthony Cuthbertson, he states that "A.I. makes it possible for machines to learn from, adjust to new inputs, and perform human-like tasks." This shows that technology is starting to match or maybe even surpass the intellect of humans, which would make machines excellent war weapons; and although they are only getting close to our intellect, they will follow orders without question and without fear for their lives. Also, in the article "We Should Not Ban 'Killer Robots,' Here's Why," the author states that "there will be some sort of arms race that will lead to the rapid advancement and propagation of things like autonomous 'armed quadcopters,' eventually resulting in technology that's accessible to anyone if they want to build a weaponized drone." This shows that people, war, and technology all come together to create weaponized A.I., and this will create a definite threat to humanity, especially when everybody is given access to it.

Changes: the transition from man-to-man combat to robot-vs-robot combat.


Conflict is progressive. 

A.I. keeps changing as technology continually progresses and improves. "Is Artificial Intelligence Dangerous?", an article by R.L. Adams, states, "AI will clearly pave the way for a heightened speed of progress," showing that progress is changing the way we live our lives and the way people fight their battles, in social life or in war. A.I. is progressive because, as the years pass, technologies work together, like combining the internet and robotics to create special aircraft and drones. As stated in the article "Military Robots: Armed, but How Dangerous?", military technology now allows actions like "using drone aircraft or bomb disposal robots, raising the prospect that those actions could be automated." Furthermore, A.I. technology is seen more and more in people's everyday lives, such as through their smart homes, smartphones, smart speakers, and assistants like Siri or Alexa. Therefore, A.I. and its role in our lives is becoming more a part of the world. There is no question that these technologies are advancing even more through the military, where militarized or weaponized A.I. can be developed over time. This contributes to the conflict being progressive.

LANGUAGE OF THE DISCIPLINE 


  • Artificial intelligence is when a machine is programmed to think like a human. An article published by Nick Heath on February 12, 2018, describes "artificial intelligence as any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task."

  • Weaponry can be defined as weapons or other instruments used to harm people, or, as the dictionary states, "weapons regarded collectively."

  • PARALLELS 
    Global- This can affect our globe positively or negatively, depending on our nations' leaders' decisions. Evidence for the negative side comes from our SLR responses to the question, "How do you think artificial intelligence may impact the human race over time?" Out of all the choices, 31.8% of responses were negative, meaning respondents believe A.I. will impact humans negatively. The other side of the parallel is how it will help our world. Research showing this comes from an article published by Andy Patrizo in 2017, which mentions, "A.I. allows for more intricate process automation, which increases productivity of resources and takes repetitive, boring labor off the shoulders of humans. They can focus on creative tasks instead." This relates to the positive side because A.I. can automate mundane tasks.

    Community- This relates to our community because it has two sides: on one side, our military is using A.I. for weaponry, while on the other, our community is using it to deliver packages, film video or capture photos, secure home systems, listen to requests for playing music or searching for information, and talk to other apps. The military side is discussed in an article by Robert W. William, who mentions, "It could provide unpredictable and adaptive adversaries for training fighter pilots." This means the military is using A.I. to help train fighter pilots, letting them visualize what it will be like to be in the air and flying around. The second side revolves around our community using A.I. to send or deliver. Research supporting this is found in an article by Murray Newlands, who mentions, "As of now, chatbots can be integrated into numerous applications, which allows them to receive payments and check on orders." This is just one way that A.I. is part of our community and the way we live, using apps, assistants, smartphones, and smart homes.
reflection 1
Our article "15 Cool New Military Weapons Joining the Fight" is about the new experimental weapons being used in wars. From this article, I learned that there are weapons used in the military which resemble those of science fiction, which symbolizes our technological advances. The article "Using Sound to Attack: The Diverse World of Acoustic Devices" is about the attack on America's diplomats in Cuba. These weapons have given the diplomats brain disorders and hearing loss.

reflection 2
We found out that technology was helping induce war, because countries now have more powerful weapons and are willing to use them to protect their country. Yes, our driving question changed to: Does the weaponization of technology positively or negatively impact humans in times of war and peace?
For the ISD we are doing a survey. Everything in our project has been difficult, especially the ISD. We need to get our student-led research approved.

reflection 3
No, we did not learn anything on our previous driving question. We changed our driving question to: Does the convergence of artificial intelligence and weaponry create a threat for humanity? We are still very behind on our project at this moment, but we are starting to get everything done in a timely manner since the symposium will be soon. We want our trifold board to have a camouflage background. Max is working on the ISD while I get all of our information approved.

reflection 4
We have started to work on the creative piece for our project. We have chosen to make a robot with painted-on camouflage to represent the way technology is progressing, how it can be used in war, and how it will hurt or assist us in life. What I would tell any upcoming symposium student is that they should get their work done efficiently and ask Mrs. Park as many questions as they can.
This is our Big Idea chart. It shows how our driving question revolves around these five generalizations: opposing forces, natural or man-made, intentional or unintentional, progressive, and allows for synthesis and change.
These are the three parts of our relevant resolution map. Two of the three parts show the strengths and weaknesses of the two arguments, and the one at the center shows what we can do to prevent the conflict and its solution.