What Is Artificial Intelligence (AI)?
Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving.
The ideal characteristic of artificial intelligence is its ability to rationalize and take actions that have the best chance of achieving a specific goal. A subset of artificial intelligence is machine learning, which refers to the concept that computer programs can automatically learn from and adapt to new data without being assisted by humans. Deep learning techniques enable this automatic learning through the absorption of huge amounts of unstructured data such as text, images, or video.
Artificial intelligence is based on the principle that human intelligence can be defined in a way that a machine can easily mimic it and execute tasks, from the most simple to those that are even more complex. The goals of artificial intelligence include mimicking human cognitive activity.
Researchers and developers in the field are making surprisingly rapid strides in mimicking activities such as learning, reasoning, and perception, to the extent that these can be concretely defined. Some believe that innovators may soon be able to develop systems that exceed the capacity of humans to learn or reason out any subject. But others remain skeptical because all cognitive activity is laced with value judgments that are subject to human experience.
As technology advances, previous benchmarks that defined artificial intelligence become outdated. For example, machines that calculate basic functions or recognize text through optical character recognition are no longer considered to embody artificial intelligence, since this function is now taken for granted as an inherent computer function.
AI is continuously evolving to benefit many different industries. Machines are wired using a cross-disciplinary approach based on mathematics, computer science, linguistics, psychology, and more.
Applications of Artificial Intelligence
The applications for artificial intelligence are endless. The technology can be applied to many different sectors and industries. AI is being tested and used in the healthcare industry for dosing drugs and tailoring treatments to individual patients, and for assisting surgical procedures in the operating room.
Other examples of machines with artificial intelligence include computers that play chess and self-driving cars. Each of these machines must weigh the consequences of any action they take, as each action will impact the end result. In chess, the end result is winning the game. For self-driving cars, the computer system must account for all external data and compute it to act in a way that prevents a collision.
Artificial intelligence also has applications in the financial industry, where it is used to detect and flag activity in banking and finance such as unusual debit card usage and large account deposits—all of which help a bank’s fraud department.
Applications for AI are also being used to help streamline and make trading easier. This is done by making supply, demand, and pricing of securities easier to estimate.
Types of Artificial Intelligence
Today’s artificial intelligence is divided into four basic types, arranged much like Maslow’s hierarchy of needs: the simplest types of artificial intelligence can only perform basic functions, while the most advanced type would be an entity fully aware of itself and its surroundings, resembling human consciousness to a large extent.
1- (reactive machines).
Reactive machines only perform basic tasks, and this type of AI is the simplest of all. Machines of this type respond to given inputs with fixed outputs, and their mechanism of action does not include any self-learning process.
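A reactive machine can be sketched as a fixed mapping from input to output. The thermostat rules below are an illustrative assumption, not an example from the article; the point is that nothing is stored and nothing is learned:

```python
# A minimal sketch of a "reactive" machine: a fixed input -> output
# mapping with no memory and no self-learning. The thermostat rules
# here are invented for illustration.
def reactive_thermostat(temp_c: float) -> str:
    # The same input always yields the same output; nothing is stored.
    if temp_c < 18:
        return "heat"
    if temp_c > 24:
        return "cool"
    return "idle"

print(reactive_thermostat(15))  # heat
print(reactive_thermostat(21))  # idle
```

However many times it runs, its behavior never changes, which is exactly what distinguishes this type from the learning types below.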
2- (limited memory).
In this type, the AI can store past data or forecasts and use them to make better predictions in the future. With limited memory, engineering and building machine-learning systems becomes more complex than building purely reactive ones.
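The idea of limited memory can be sketched as an agent that keeps a short window of past observations and uses them to predict the next value. The moving-average predictor below is an illustrative assumption, chosen for simplicity:

```python
from collections import deque

# Sketch of "limited memory": keep only a short window of past
# readings and use them to predict the next one (a moving average).
class LimitedMemoryPredictor:
    def __init__(self, window: int = 3):
        self.history = deque(maxlen=window)  # oldest data is discarded

    def observe(self, value: float) -> None:
        self.history.append(value)

    def predict(self) -> float:
        # Predictions improve as relevant past data accumulates.
        return sum(self.history) / len(self.history)

p = LimitedMemoryPredictor(window=3)
for reading in [10.0, 12.0, 14.0]:
    p.observe(reading)
print(p.predict())  # 12.0
```

Unlike a reactive machine, the same input can produce different outputs here, because stored history shapes the prediction.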
3- (theory of mind).
Theory of mind is the next stage of artificial intelligence, which scientists are currently working to create and develop. A machine of this type would be able (thanks to artificial intelligence technology) to understand the entities it interacts with and recognize their needs, feelings, beliefs, and even the thought processes they are carrying out.
4- (self-awareness).
In a distant, still-unknown future, humans may finally be able to develop a self-aware AI: the same kind of entity we see in science fiction movies. This kind of AI raises many hopes, but it also raises many fears. The idea of a self-aware machine with its own independent intelligence is troubling, because it would mean humans must negotiate with a machine they made with their own hands, and the outcome of such negotiations opens the door to many assumptions, expectations, and imaginings.
How Does Artificial Intelligence Work?
Artificial intelligence works by combining large amounts of data with fast and iterative processing and smart algorithms, allowing software to automatically learn from patterns or features in the data.
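The combination of data with fast, iterative processing can be sketched as a training loop: an algorithm repeatedly adjusts a parameter so the model fits the data better. The tiny dataset and learning rate below are illustrative assumptions:

```python
# Sketch of "data + iterative processing + algorithm":
# gradient descent repeatedly adjusts a parameter w so that
# the model y = w * x fits the (x, y) pairs below.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # pairs following y = 2x

w = 0.0    # model parameter, learned automatically from the data
lr = 0.05  # learning rate (step size for each iteration)

for _ in range(200):  # the iterative processing
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad    # move w in the direction that reduces error

print(round(w, 3))  # 2.0 -- the pattern hidden in the data
```

Each pass reduces the error a little; after enough iterations the software has "learned from the pattern in the data" without that pattern ever being programmed in.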
Artificial Intelligence is a broad field of study that includes many theories, methods, and techniques, in addition to the following major subfields:
▪️ Machine learning automates the building of analytical models, using methods from neural networks, statistics, operations research, and physics to find hidden insights in data without being explicitly programmed where to look or what to conclude.
▪️ A neural network is a type of machine learning model consisting of interconnected units (analogous to neurons) that process information by responding to external inputs and passing information between units. The process requires multiple passes over the data to find connections and derive meaning from undefined data.
▪️ Deep learning uses massive neural networks with many layers of processing units, taking advantage of advances in computing power and improved training techniques to learn complex patterns with large amounts of data. Common applications include image and speech recognition.
▪️ Cognitive computing is a subfield of artificial intelligence that strives for natural, human-like interaction with machines. The ultimate goal is for a machine to simulate human processes through the ability to interpret images and speech, and then respond coherently.
▪️ Computer vision relies on pattern recognition and deep learning to recognize what is in an image or video. When machines can process, analyze, and understand images, they can capture images or video in real time and interpret their surroundings.
▪️ Natural language processing (NLP) is the ability of computers to analyze, understand, and generate human language, including speech. The next stage of NLP is natural language interaction, which allows humans to communicate with computers in everyday language to perform tasks.
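The first three subfields above can be illustrated with a single toy example: one artificial neuron trained by repeated passes over data, here learning the logical AND function. This is a deliberately minimal sketch of the ideas, not any particular library's implementation:

```python
import math
import random

# One artificial "neuron": a weighted sum of inputs squashed by a
# sigmoid. Trained with gradient descent to learn the AND function.
random.seed(0)
w1, w2, b = random.random(), random.random(), random.random()

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Training data: ((input1, input2), target) pairs for logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(10000):                 # multiple passes over the data
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        err = out - target             # prediction error
        # Chain rule through the sigmoid: d(out)/dz = out * (1 - out)
        grad = err * out * (1 - out)
        w1 -= 0.5 * grad * x1
        w2 -= 0.5 * grad * x2
        b  -= 0.5 * grad

preds = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(preds)  # [0, 0, 0, 1] -- the network has learned AND
```

Deep learning stacks many layers of such units, and fields like computer vision and NLP apply the same principle to pixels and words instead of bits.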
Additionally, several technologies enable and support AI:
▪️ GPUs are key to AI, as they provide the heavy computing power required for iterative processing. Neural network training requires big data as well as computational power.
▪️ The Internet of Things generates massive amounts of data from connected devices, most of which goes unanalyzed; automating models with artificial intelligence will allow us to use more of it.
▪️ Advanced algorithms are being developed and combined with new ways to analyze more data faster and at multiple levels. This intelligent manipulation is key to identifying and predicting rare events, understanding complex systems and optimizing unique scenarios.
▪️ APIs are portable packages of code that make it possible to add AI functionality to existing products and software packages. They can add image-recognition capabilities to home security systems, or Q&A capabilities that describe data, create captions and headlines, or surface interesting patterns and insights in the data.
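The API idea above can be sketched as bolting an AI capability onto an existing product function. Everything here is invented for illustration: the stub "classifier" just uses average brightness, whereas a real vision API would run a trained model behind a network endpoint:

```python
# Hypothetical sketch of exposing an AI capability through a small
# API and attaching it to an existing product. The classifier and
# its labels are invented for illustration only.
def classify_image_stub(pixels: list) -> dict:
    """Pretend image classifier: average brightness decides the label."""
    brightness = sum(pixels) / len(pixels)
    label = "day" if brightness > 0.5 else "night"
    return {"label": label, "confidence": abs(brightness - 0.5) * 2}

def add_captioning(product_describe, classifier):
    """Wrap an existing product function with the AI capability."""
    def describe_with_ai(pixels):
        result = classifier(pixels)
        return f"{product_describe()} (scene: {result['label']})"
    return describe_with_ai

describe = add_captioning(lambda: "Front-door camera", classify_image_stub)
print(describe([0.9, 0.8, 0.7]))  # Front-door camera (scene: day)
```

The existing product code never changes; the AI functionality arrives as a separate, portable package, which is the appeal of AI-as-an-API.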
Pros of Artificial Intelligence
▪️ Reduced human error: computers do not make human mistakes if they are programmed correctly.
▪️ Taking risks instead of humans, such as missions to Mars or defusing bombs.
▪️ Robots can operate around the clock without tiring the way humans do.
▪️ Help with repetitive jobs.
▪️ Faster and more accurate decision-making.
▪️ Artificial intelligence powers inventions in nearly every field that will help humans solve the most complex problems.
Downsides of Artificial Intelligence
▪️ High cost, due to the need to keep updating software to meet the latest requirements.
▪️ AI can make humans lazy by automating most of their work.
▪️ Less human intervention can disrupt employment standards and lead to unemployment.
▪️ Machines cannot develop relationships with humans, an essential trait when it comes to managing a team.
▪️ Machines can only perform the tasks they are designed for, and they tend to fail or produce unrelated output when faced with data they do not hold.
“Artificial Intelligence (AI)”, www.investopedia.com
“Artificial intelligence: will it save the world or destroy humankind?” / https://www-alroeya-com
“Everything you need to know about artificial intelligence” / https://www-for9a-com
“The pros and cons of artificial intelligence and its most important applications” / https://io.hsoub.com