Artificial intelligence (AI)

Artificial intelligence («AI») is a branch of computer science and is currently one of the most studied topics in the world. It is a widely used and constantly evolving technology, built on a large and powerful set of computer programming techniques. AI systems predominantly manifest themselves as software-based applications or platforms, but they are often integrated into systems that combine hardware with software (as in robotics). Despite its prominence, there is still no universal definition of AI, or of the technologies and methods associated with its operation. In addition, there are multiple definitions for the various types of AI, which often vary according to their nature or main characteristics. Among others, «Narrow AI», «Artificial General Intelligence», «Artificial Superintelligence», «Responsible AI», «Trustworthy AI», «Ethical AI», «Explainable AI» and «Adversarial AI» stand out. This multiplicity of associated concepts makes the field highly complex, and the approaches and techniques used also vary across sectors of activity.

In 2021, the European Commission presented a proposal for a regulation to harmonize the rules on AI. Internationally known as the «AI Act», it applies to anyone who wants to create or deploy AI systems in the EU, regardless of their location: whenever the outputs of a system produce, or are intended to produce, effects in the EU, the regulation covers users, providers and distributors in the EU and in third countries alike. Areas that fall outside the scope of EU law, as well as Member States' competences in matters of national security, are excluded; the regulation therefore does not cover AI systems used exclusively for military purposes, for research and innovation, or for personal, non-professional use. The definition of «artificial intelligence system» in the final text of the AI Act was based on the definition developed in recent years by the OECD Artificial Intelligence Policy Observatory - OECD.AI. Specifically, according to the AI Act, an AI system is «a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments». With this definition, the EU seeks to differentiate AI from «simpler software systems».

This new regulatory framework, which has since been approved and published, categorizes AI systems into four levels of risk: «unacceptable risk», «high risk», «limited risk» and «minimal or no risk». In this context, certain AI applications have been considered an unacceptable threat to fundamental rights and freedoms, which has led to their prohibition. Among the main applications banned or restricted are certain biometric identification systems, facial or emotion recognition, the manipulation of human behavior (overriding free will), predictive policing, and systems that exploit vulnerabilities based on age, disability or social and economic status. Alongside the AI Act, the EU has also developed the «Coordinated Plan on Artificial Intelligence», essentially with a view to accelerating investment in AI, implementing AI strategies and programs, and aligning AI policy to avoid fragmentation in Europe.

Machine learning (ML) & deep learning (DL)

Machine learning («ML») began with classical statistical methods developed between the 18th and 20th centuries, mainly for processing small data sets. In the 1930s and 1940s, computing pioneers, including the mathematician Alan Turing, laid the groundwork that led to the discovery of ML methods and techniques. However, these methods could only be implemented at the end of the 1970s, when computers with sufficient processing power first became available. Since then, learning methods have been extensively developed and multiple subcategories and strands of ML have emerged.

In 2022, the International Organization for Standardization - ISO and the International Electrotechnical Commission - IEC released a set of cross-sector definitions for the concepts associated with AI. According to these definitions, ML is a subcategory of AI that allows a system to learn and improve its performance, based on the knowledge it acquires through the training process, without all of its instructions having to be explicitly programmed. Knowledge is acquired by identifying patterns in sample data which, taken together, lead to the construction of an algorithmic model based on those patterns. This model is usually applied to new data to make predictions or generate other types of results, such as text translation. Depending on the specific practical application, the training process can involve various forms of learning, including «supervised learning», «unsupervised learning», «semi-supervised learning», «reinforcement learning», «self-supervised learning», «transfer learning» and «active learning».

Currently, and regardless of the form of learning, «artificial neural networks» («ANN») serve as the basis for most contemporary ML practice, including «deep learning» («DL»), which is a subcategory of ML. According to the ISO/IEC cross-industry definitions, DL is a form of ML that uses ANN to identify patterns in sample data and generate appropriate results. This type of system is used for complex tasks and has contributed significantly to the development of other areas of AI, such as voice and image recognition, object detection and autonomous driving. Even so, chatbots such as ChatGPT (OpenAI), Copilot (Microsoft), Claude (Anthropic) and Gemini (Google) are currently among the practical applications best known to the general public. This recent progress is attributed to factors such as the greater availability and abundance of training data, increased computing power, substantial investment in IT infrastructure and innovative applications of disruptive technologies.
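
To make the idea of «learning from sample data» concrete, the sketch below trains a simple supervised model on labeled examples and then applies it to unseen data. It is only an illustration, assuming Python with the scikit-learn library and its bundled iris dataset; none of these choices come from the text above.

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Sample data: flower measurements (features) with known species (labels).
    X, y = load_iris(return_X_y=True)

    # Hold out part of the data to check how well the learned model generalizes.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    # Training: the algorithm identifies patterns linking measurements to species
    # and builds a model based on those patterns.
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)

    # The trained model is then applied to new, unseen data to make predictions.
    print("accuracy on unseen data:", model.score(X_test, y_test))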

General-purpose artificial intelligence (GPAI) & foundation model (FM)

Usually, the term «general-purpose artificial intelligence» («GAI», or «GPAI») refers to models that are versatile enough to be used in a wide range of applications, such as «foundation models» («FMs»). In turn, FMs consist of «machine learning» («ML») models trained on very large data sets, which allows them to be easily adapted to a wide range of generic tasks, such as producing results through «generative artificial intelligence» («GenAI»). These models use different methods, namely supervised or reinforcement learning, as exemplified by OpenAI's use of reinforcement learning to refine the ChatGPT models and mitigate as far as possible the risk of inaccurate or potentially harmful results, such as violent or biased content. FMs also include «large language models» («LLMs»), which undergo extensive training on large sets of text data in order to perform multiple language-related tasks, including text processing and generation. In addition to ChatGPT, other current LLMs, such as those underlying Copilot (Microsoft), Claude (Anthropic) or Gemini (Google), fall into this category.

In this sense, the emergence of the FM concept represents a change in the development and implementation of AI models, as they are now adaptable to a wide variety of generic tasks in the field of GenAI (such as translating and summarizing text, answering questions, and generating new text, images, audio or visual content based on text or voice commands). Companies developing FMs can choose to offer them directly to consumers, but also to other companies looking to adapt the models to specific applications. Some models remain private and are used confidentially within a particular company, while others (or components of them) are publicly accessible for download, modification and distribution under license. There are also models that are hosted on cloud platforms and made available via an interface that allows users to access and adjust them without fundamentally altering the underlying model. Today, GPAI technologies are already prevalent in our daily lives, whether through virtual assistants, search engines, navigation software, online banking, financial services or facial recognition systems.
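
As an illustration of access via a hosted interface, the sketch below calls a model made available on a cloud platform and steers its behavior solely through the request, leaving the underlying model untouched. It assumes Python with the OpenAI SDK, an API key in the OPENAI_API_KEY environment variable, and an illustrative model name; none of these details come from the text above.

    from openai import OpenAI

    # The client reads an API key from the OPENAI_API_KEY environment variable.
    client = OpenAI()

    # The request adjusts the model's behavior through instructions only; the
    # underlying model itself is not modified. The model name is illustrative.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You summarize technical topics in plain language."},
            {"role": "user", "content": "In two sentences, what is a foundation model?"},
        ],
    )

    print(response.choices[0].message.content)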

Large language models (LLMs) & natural language processing (NLP)

In simple terms, a «large language model» («LLM») is a type of «foundation model» («FM») trained on large amounts of data to perform various language-related tasks in the field of «natural language processing» («NLP»). NLP is a subcategory of AI that uses «machine learning» («ML») and «deep learning» («DL») methods in computer programming to understand, interpret and generate human language. In terms of processing, algorithms examine the linguistic patterns used to construct sentences and paragraphs in order to understand how words, context and structure jointly contribute to meaning. LLMs are applied to a variety of tasks, such as converting speech into text, online tools for summarizing text, chatbots, speech recognition, language translation, and so on. Although not all language-related tasks require LLMs, the latest developments in the field have made many of these tasks a practical reality.

Given the latest developments in LLMs and NLP, it is becoming increasingly difficult to distinguish between human-generated and machine-generated language. However, there are still some limitations, namely the difficulty in dealing with linguistic nuances, the inconsistent construction of precise sentences, limited contextual understanding, and the substantial resources required for training. The term «Frontier AI», as used in the UK Parliament's supporting reports, refers to GPAI made up of highly capable LLMs able to perform a wide variety of tasks.
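
To ground the tasks mentioned above, the sketch below runs two common NLP tasks, sentiment classification and text summarization, with pre-trained models. The use of Python and the Hugging Face «transformers» library is an assumption of this example, not something prescribed by the text.

    from transformers import pipeline

    # Sentiment analysis: classify the tone of a sentence with a default
    # pre-trained model (downloaded on first use).
    classifier = pipeline("sentiment-analysis")
    print(classifier("The new guidance brings welcome clarity."))

    # Summarization: condense a longer passage into a short summary.
    summarizer = pipeline("summarization")
    text = (
        "Large language models are trained on very large text corpora and can be "
        "applied to tasks such as translation, question answering and text "
        "generation, although they still struggle with nuance and context."
    )
    print(summarizer(text, max_length=40, min_length=10)[0]["summary_text"])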

Generative artificial intelligence (GenAI)

Traditionally, «machine learning» («ML») methods focused predominantly on predictive models, used to observe and classify patterns in content. The development of «artificial neural networks» («ANN»), «deep learning» («DL»), «generative adversarial networks» («GAN») and «transformers» has led to a significant advance in «generative artificial intelligence» («GenAI»). Unlike traditional ML methods, which analyze and classify existing data, the DL methods on which GenAI is based make it possible to produce new data with characteristics similar to the data on which the system was trained. In simple terms, GenAI refers to a subcategory of AI that, in response to user requests, produces text, images, audio, video or, depending on the case, other types of creative content, ranging from music and other forms of art to virtual worlds, new product designs and the optimization of business processes. Among the main applications of GenAI are chatbots, photo and video filters, and virtual assistants.

Currently, the output generated by GenAI applications can closely resemble content produced by humans, depending on the quality of the methods and systems used. The output derives from a carefully articulated recombination of the data used to train the algorithms. The large amount of data used in the training process leads the user to regard the output as «creative», and the variety of results generated from a single input request contributes to its realistic appearance.

For companies, this type of tool represents a clear opportunity in terms of remote and automated customer service, among other uses. The use of GenAI has significant implications for various sectors: essentially, any organization that produces creative or technical materials can benefit from it, and the time and resources saved allow organizations to explore other business opportunities. Since developing a GenAI application currently still requires resources that only larger companies can afford, smaller companies looking to use GenAI can take advantage of the solutions already on the market and, depending on the options available, adjust them to specific tasks.
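
To make the generative step concrete, the sketch below asks a small pre-trained language model to continue a prompt and returns several different continuations of the same request. The use of the Hugging Face «transformers» library and the GPT-2 model are assumptions of this example, not choices made in the text above.

    from transformers import pipeline

    # A small pre-trained language model, downloaded on first use.
    generator = pipeline("text-generation", model="gpt2")

    # The model produces new text with characteristics similar to its training
    # data, continuing the prompt rather than retrieving an existing answer.
    results = generator(
        "Generative artificial intelligence can help companies",
        max_new_tokens=40,
        do_sample=True,          # sample so that the continuations differ
        num_return_sequences=2,  # several outputs from a single request
    )
    for candidate in results:
        print(candidate["generated_text"])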

Predictive analytics

«Predictive analytics» systems use AI techniques such as «natural language processing» («NLP»), «computer vision» and «deep learning» («DL»). Over the last few years, they have proven to be effective tools for managing and mitigating risks in sectors such as healthcare, finance, banking, supply chains, e-commerce and the public sector. This type of technology also has the potential to transform legal risk management, as well as to identify regulatory issues and improve legal research processes. In addition to significantly increasing the speed and accuracy of legal work, it allows for increasingly effective time management and, consequently, a reduction in costs.

European law enforcement and criminal justice authorities have been looking to implement these types of AI systems, essentially with a view to profiling individuals and areas, predicting alleged future criminal behavior and assessing the degree of «risk» of future crimes or criminality. However, these systems appear to show biased and discriminatory tendencies, and their application in the police and judicial sphere presents multiple and serious risks, including the violation of freedoms, rights and guarantees, in particular the right to a fair trial and the presumption of innocence. One of the main problems associated with the use of predictive AI systems is precisely that they reinforce existing power imbalances and perpetuate structural discrimination based on factors such as racial and ethnic origin, socio-economic and professional status, disability, migratory status and nationality, among others. There are also privacy concerns, since the vast majority of the data used to develop and subsequently apply these AI systems comes from personal data and activity logs on social networks. Misuse of this type of data could lead to privacy violations, excessive policing, and disproportionate and illegal surveillance. The AI Act restricts the use of predictive policing tools and criminal prediction systems.
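
To make the bias concern tangible, the sketch below trains a toy risk model on synthetic «historical» data in which the records of one group are skewed, and then compares the share of «high risk» predictions per group; the model simply reproduces the imbalance present in the data. The dataset, the model and the comparison used here are all assumptions of this example.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic historical records: two features, a binary group label and an
    # observed outcome that is already skewed towards one group.
    n = 1000
    group = rng.integers(0, 2, size=n)
    features = rng.normal(size=(n, 2)) + group[:, None] * 0.5
    outcome = (features.sum(axis=1) + rng.normal(scale=0.5, size=n) > 1).astype(int)

    # Train a predictive model on the historical data.
    X = np.column_stack([features, group])
    model = LogisticRegression().fit(X, outcome)
    predicted = model.predict(X)

    # The model reproduces the imbalance in the data: the share of «high risk»
    # predictions differs markedly between the two groups.
    for g in (0, 1):
        rate = predicted[group == g].mean()
        print(f"group {g}: {rate:.0%} predicted high risk")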

Computer vision (CV)

Traditionally, this area of AI has focused on programming computer systems to interpret and understand images, videos and other visual data in order to perform actions or make recommendations based on the information collected and processed. Over the last few decades, the development of the technology associated with «computer vision» («CV») has been a laborious, time-consuming and computationally complex process, requiring vast volumes of meticulously labeled data for supervised learning. The development of this type of technology dates back to the 1950s. Initially, CV focused on simple two-dimensional images for statistical pattern recognition. Possible practical applications became evident in 1978, when an approach was introduced that allowed 3D models to be extrapolated from computer-generated 2D sketches.

Since then, and in light of recent advances in AI, image recognition technologies have diversified and given rise to various categories based on use cases, including «object recognition», «medical image analysis», «navigation», «facial recognition» and «video surveillance». Among these categories, facial recognition technology stands out, essentially because of the risks it poses to fundamental rights, such as the right to privacy and data protection. The new rules of the AI Act cover, among other things, the prohibition of certain AI applications that could threaten citizens' rights, such as the indiscriminate collection of facial images from the Internet or from video surveillance footage to create facial recognition databases.
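
As a concrete illustration of the «object recognition» use case, the sketch below classifies a single image with a pre-trained network. It assumes Python with PyTorch and torchvision, a ResNet-50 pre-trained on ImageNet, and a hypothetical local file example.jpg; none of these choices come from the text above.

    import torch
    from torchvision.io import read_image
    from torchvision.models import resnet50, ResNet50_Weights

    # Load a network pre-trained on ImageNet, together with its preprocessing.
    weights = ResNet50_Weights.DEFAULT
    model = resnet50(weights=weights)
    model.eval()

    # Read a local image (hypothetical file) and apply the model's own transforms.
    image = read_image("example.jpg")
    batch = weights.transforms()(image).unsqueeze(0)

    # The network outputs a score for each of the 1,000 ImageNet object categories.
    with torch.no_grad():
        scores = model(batch).squeeze(0).softmax(dim=0)

    best = scores.argmax().item()
    print(weights.meta["categories"][best], f"({scores[best].item():.1%})")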
