-a friend of mine posted on his Facebook status. I read this and immediately began to funnel through my thoughts. To begin simply, I thought about a current predicament I had been placed in.
I am set to graduate this May, with a mere 11 credits keeping me from walking the stage. Enrolled in three 3-credit classes and an internship, I anticipated a relaxed semester. On the contrary, within the first week of classes I found myself overwhelmed with time management difficulties. After much thought, I settled on dropping a class and preparing for a CLEP exam instead. In this scenario I felt that I made the “most intelligent” decision.
However, when I placed my decision within a wider debate, I became hesitant about it. What constitutes the “most intelligent”? Let’s say a company was created to develop green technologies, such as solar panels and wind turbines. The company plans to reduce the use of nonrenewable resources and promote a “greener” environment. However, to accomplish this, the company must destroy much of the environment, and what it destroys will be irreplaceable. The growth of the company will damage the very Earth it plans on saving. So what then? What is the “most intelligent” decision? To allow the company to destroy the environment in order to save it?
Machiavelli’s idea that “the end justifies the means” comes into play here. If the outcome is positive in the end, does that justify doing “wrong” along the way? The whole idea of a “most intelligent” decision baffles me to no end. We have morals and opinions that shape our decisions. And so we have debates and arguments, which eventually escalate into fights and wars. But this is because we are human, with individualized thoughts.
So what about AI? Suppose artificial intelligence reaches a point at which androids are capable of making their own decisions. As a robotic mechanism, its decisions are made solely through its computations. A series of algorithms is programmed into the computer system, and given the situation, the artificial intelligence acts according to the scientifically “most intelligent” decision. Does a computer’s intelligence, then, truly constitute what is really the best and most intelligent decision?
When it comes down to it, it is a human who programs the artificial intelligence. So the ultimate decision the computer arrives at is an extension of the programmer’s decisions. Does that make it the right decision? Furthermore, is there ever a “right” or “best” decision? We must consider who the decision affects when making it. If the decision only affects the decision maker, then sure, he or she can make the “best” decision. When the decision affects a wider group or the environment, we must reevaluate the outcomes.
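Think of it this way: a machine’s “decision” is often just a scoring routine. Here is a minimal, made-up sketch in Python of what that could look like. The options, criteria, and weights are invented purely for illustration, but the point stands on its own: the “best” choice falls straight out of numbers a human chose to put in.

# A toy decision routine: the machine "decides", but only by scoring
# options with weights a human programmer chose.
# (Options, criteria, and weights are invented for illustration.)

def best_decision(options, weights):
    """Return the option with the highest weighted score."""
    def score(option):
        return sum(weights[criterion] * value
                   for criterion, value in option["impacts"].items())
    return max(options, key=score)

options = [
    {"name": "build the solar plant",
     "impacts": {"clean_energy": 0.9, "habitat_destroyed": 0.6}},
    {"name": "leave the land untouched",
     "impacts": {"clean_energy": 0.0, "habitat_destroyed": 0.0}},
]

# The programmer's values, encoded as numbers: reward clean energy,
# penalize habitat loss.
weights = {"clean_energy": 1.0, "habitat_destroyed": -0.5}

print(best_decision(options, weights)["name"])  # -> "build the solar plant"

Swap the habitat weight from -0.5 to -2.0 and the “best” decision flips to leaving the land untouched. The machine didn’t change its mind; the human values behind it did.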