
Linear Regression using ChatGPT

[Originally published on March 7, 2023]

ChatGPT is a large language model (LLM) from OpenAI that was released a few months ago. Since then, it has generated a great deal of excitement about its possible uses, plenty of hype, and considerable concern about the harm that might result from its use. Within five days of its release, ChatGPT had over one million users, and that number has been growing ever since. The hype around ChatGPT is not surprising; the field of AI has been hyped since its inception. One need only recall the Nobel Prize winner Herbert Simon’s 1965 statement that “Machines will be capable, within twenty years, of doing any work that a man can do.” Several concerns about the potential harm from ChatGPT’s use have also been raised; it has been found to present inaccurate information as fact, and to do so very convincingly. Its capabilities are good enough that Elon Musk recently tweeted, “ChatGPT is scary good. We are not far from dangerously strong AI.”

Since ChatGPT’s release, many companies and researchers have been exploring its capabilities, giving rise to what is now being called Generative AI. It has been used to write essays, emails, and even scientific articles, prepare travel plans, solve math problems, write code, and create websites, among many other uses. Many companies have incorporated it into their apps. And of course, Microsoft has integrated it into its Bing search engine.

Given all the excitement about it, I decided to use it to build a linear regression model. The results of my interaction with ChatGPT are presented below. The complete interaction was over in a minute or so, slowed primarily by my one-finger typing.



So, all it took to build the regression model was to feed in the data and let ChatGPT know the predictor variables. It looks like a great tool, but like any other tool, it needs to be used in a constructive manner. I hope you like this simple demo of ChatGPT's capabilities, and I encourage you to try it on your own. ChatGPT is free to use, but you will need to register with OpenAI.
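
For readers who would rather reproduce this kind of exercise in code than through a chat prompt, here is a minimal sketch using scikit-learn. The file name and column names are placeholders for illustration; they are not the data or code from the ChatGPT session.

    # Minimal linear regression sketch using scikit-learn.
    # The CSV file name and column names below are placeholders for illustration.
    import pandas as pd
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    data = pd.read_csv("my_data.csv")          # hypothetical data file
    X = data[["predictor_1", "predictor_2"]]   # predictor variables
    y = data["target"]                         # response variable

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    model = LinearRegression().fit(X_train, y_train)
    print("Coefficients:", model.coef_)
    print("Intercept:", model.intercept_)
    print("Test R^2:", r2_score(y_test, model.predict(X_test)))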

Reinforcement Learning with Human Feedback: A Powerful Approach to AI Training

The unprecedented capabilities exhibited by large language models (LLMs) such as ChatGPT and GPT-4 have created enormous excitement as well as concerns about the impact of AI on society in the near and far future. Behind the success of LLMs, and of AI in general, lies, among other techniques, a learning approach called Reinforcement Learning with Human Feedback (RLHF). In this blog post, we will try to understand what RLHF is and why it offers a powerful approach to training AI models. Before we do that, however, let's try to understand the concept of reinforcement learning (RL).

What is Reinforcement Learning (RL)?

RL, inspired by the principles of behavioral psychology, is a machine learning technique wherein the learner, called an agent, learns to make decisions by exploring an environment through a trial-and-error process to achieve its goal. Each action by the agent results in feedback in the form of a reward or punishment. While performing actions and receiving feedback, the agent tries to maximize the expected cumulative reward over time. The figure below shows the basic working of the RL algorithm.




The agent has a repertoire of actions to choose from at any given instant. Depending upon the environment, the action space is discrete or continuous. For example, the action space is discrete for an agent learning to play a board game. On the other hand, the action space is continuous for an agent, such as an autonomous robot, learning to stay in a driving lane. The choice of the agent's action at a given time is governed by a policy. The policy can be deterministic or stochastic, and it is implemented as a table lookup, a simple function, or a search.

The environment refers to the world with which the agent interacts. The term state describes the observation of the environment at any time, which the agent uses as an input to decide its next action. As a result of the agent's action, the state of the environment changes, leading to a new input to the agent. For example, the positions of the pieces on a chess board at any time define the state of the environment for an agent learning to play chess. Once the agent makes a move, the state of the environment changes; the new state is then used by the agent for its next action.

The agent's actions result in rewards, which may be positive, neutral, or negative. To ensure that the agent is not focused only on short-term rewards, a value function of the state-action pair is specified to estimate the expected long-term return. To train the agent to achieve its goal, either a policy-based or a value-based implementation is used. A policy-based implementation involves coming up with a policy, that is, a deterministic or stochastic strategy, to maximize the cumulative reward. A value-based implementation tries to optimize the chosen value function.
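
To make the reward, value, and policy vocabulary concrete, here is a minimal sketch of tabular Q-learning (a value-based method) on a toy five-cell corridor. The environment, rewards, and hyperparameters are invented purely for illustration.

    # Minimal tabular Q-learning sketch on a toy 5-cell corridor.
    # The agent starts in cell 0 and gets a reward of +1 for reaching cell 4.
    import random

    N_STATES, ACTIONS = 5, [-1, +1]        # move left or right
    alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate
    Q = [[0.0, 0.0] for _ in range(N_STATES)]

    for episode in range(500):
        state = 0
        while state != N_STATES - 1:
            # Epsilon-greedy action selection.
            a = random.randrange(2) if random.random() < epsilon else max(range(2), key=lambda i: Q[state][i])
            next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
            reward = 1.0 if next_state == N_STATES - 1 else -0.01
            # Q-learning update: move Q(s,a) toward reward + gamma * max_a' Q(s',a').
            Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
            state = next_state

    # The learned greedy policy should always move right toward the goal cell.
    print([["left", "right"][max(range(2), key=lambda i: Q[s][i])] for s in range(N_STATES - 1)])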

Applications needing sequential decision making are excellent candidates for RL. It has been successfully used in autonomous robotics, finance, recommendation systems, and gaming. AlphaGo from DeepMind is the most well-known example of RL. AlphaGo was the first computer program to defeat a professional human Go player, a significant achievement given that Go is widely regarded as the most challenging classical game for artificial intelligence because of its complexity.

While traditional RL algorithms have been successful in solving many complex problems, their adoption in the real world has been slow. One limiting factor is that designing reward functions that accurately capture the desired behavior can be daunting and time-consuming. Moreover, in complex real-world scenarios, defining appropriate reward functions can be highly challenging or even impractical. Reinforcement learning with human feedback (RLHF) addresses this challenge by leveraging human feedback to provide a more effective and efficient learning signal.

Reinforcement Learning with Human Feedback (RLHF)

RLHF was originally developed for training simple robots in simulated environments and Atari games. The key idea behind RLHF is to involve human trainers who interact with the AI agent and provide evaluative feedback on its actions. As an example, imagine a robotic arm being trained to grasp objects. Instead of relying solely on the predefined rewards from the environment (such as success or failure of the grasp), a human trainer provides explicit reward signals. The trainer observes the robot's actions and assigns positive or negative rewards based on the quality of the grasp. This feedback helps the robot learn more quickly and accurately. The feedback can take various forms, such as binary signals indicating whether an action is correct or incorrect, preference rankings among different actions, or even more nuanced feedback like explanations or demonstrations. By incorporating this feedback into the learning process, RLHF algorithms can learn from human expertise and accelerate the training process.

There are several approaches to implementing RLHF. One common technique is known as reward modeling, where human trainers provide explicit reward signals instead of relying solely on the environment's predefined rewards. The RL agent then learns from these human-generated rewards to optimize its behavior. Another approach involves interactive learning, where the agent actively seeks feedback from human trainers during its exploration phase. The trainers can guide the agent by providing corrective feedback or demonstrations, helping it learn more efficiently. The process of collecting human feedback and refining the model through reinforcement learning is repeated iteratively, resulting in continuous improvement in the model's performance.
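
As a rough illustration of reward modeling, the sketch below trains a tiny reward model in PyTorch on pairs of responses where one is preferred over the other, using a pairwise (Bradley-Terry style) loss. The responses are stand-in random feature vectors; a real system would score embeddings of actual text with a much larger network.

    # Sketch of reward-model training from pairwise human preferences.
    # Responses are stand-in random feature vectors; real systems use text embeddings.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)
    DIM = 16

    reward_model = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1))
    optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

    # Synthetic "preference" data: the chosen response has a larger hidden quality score.
    quality = torch.randn(DIM)

    def make_batch(n=64):
        a, b = torch.randn(n, DIM), torch.randn(n, DIM)
        prefer_a = (a @ quality) > (b @ quality)
        chosen = torch.where(prefer_a.unsqueeze(1), a, b)
        rejected = torch.where(prefer_a.unsqueeze(1), b, a)
        return chosen, rejected

    for step in range(200):
        chosen, rejected = make_batch()
        # Pairwise loss: the chosen response should score higher than the rejected one.
        loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print("final loss:", loss.item())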

The benefits of RLHF are numerous. Firstly, it reduces the sample complexity of RL algorithms, enabling faster and more efficient learning. By incorporating human expertise, RLHF algorithms can leverage existing knowledge and generalize to new situations more effectively. Secondly, RLHF allows for more precise control over the agent's behavior. Human trainers can steer the learning process towards desired outcomes, ensuring that AI systems adhere to specific ethical guidelines or safety constraints. This control and transparency are crucial when deploying AI in critical domains such as healthcare, finance, or autonomous vehicles.

RLHF also bridges the gap between AI and human collaboration. By involving human trainers in the learning loop, RLHF fosters a symbiotic relationship between humans and machines. Trainers can learn from the agent's behavior and iteratively refine their guidance, resulting in a continuous learning feedback loop that benefits both parties. Furthermore, RLHF enables the development of AI systems that can adapt to changing environments or user preferences more effectively, as trainers can update the feedback and influence the agent's behavior in real-time.

RLHF in ChatGPT and GPT-4

OpenAI has used RLHF to train the ChatGPT and GPT-4 models. The full details are available in a paper titled Training Language Models to Follow Instructions with Human Feedback. Here, I will briefly outline the three steps for applying RLHF to a pre-trained language model.

  1. The first step is to collect a dataset of human-generated prompts and responses, and fine-tune the pre-trained language model. 
  2. The next step is to have humans rank the model responses to prompts and use these rankings to train a reward model. 
  3. The final step is to use the reward model as a reward function, and fine-tune the model to maximize this reward. 

The above steps may be repeated to ensure that the model responses are aligned with human responses. The RLHF paper from OpenAI indicates using 40 human labelers, selected through a screening process, to generate responses. The prompts for the task consisted primarily of diverse text prompts submitted to a commercial language model plus a small number of labeler-written prompts. 

Limitations and Challenges of RLHF

Despite its promise, RLHF poses its own set of challenges. Incorporating human feedback introduces biases and subjectivity that need to be carefully addressed. Balancing the trainer's guidance against the agent's own exploration is a delicate task. While aligning language models with user intentions makes them more useful, it also makes them more prone to misuse, because it becomes easier to use these models to generate convincing misinformation or hateful and abusive content. Scalability also becomes a concern when deploying RLHF algorithms in large-scale applications with numerous trainers or in scenarios where continuous feedback is required.

Nevertheless, RLHF represents a compelling avenue for advancing AI capabilities. By merging the strengths of human intelligence with the computational power of AI, RLHF holds the potential to accelerate the development of intelligent systems capable of solving complex real-world problems. We can anticipate exciting breakthroughs and applications that harness the collaborative power of humans and machines as researchers and engineers continue to explore this field.



Exploring Large Language Models: Types and Applications

Large language models (LLMs) are currently all the rage. Who hasn't heard of ChatGPT, which can deliver all kinds of responses to user prompts, be it a recipe, suggestions for a vacation, or an essay on a topic for a term paper? All of this is possible because of the underlying large language models.

So what are large language models? How do these models work? What can we do with them? Let's try to answer these questions without going into too much technical detail.

What are Large Language Models?

We will begin by first trying to understand what a language model is. Think about using your cell phone for messaging. As you enter text, your cell phone tries to guess the word you are typing, as shown in the figure below. Under the hood, a language model is computing probabilities for the next character or word and displaying the top three to five most probable candidates.


There are a few types of language models, such as rule-based models, statistical language models, and recurrent neural networks (RNNs). Rule-based models rely on predefined linguistic rules and heuristics to perform their calculations. These models require experts to manually create and fine-tune rules, making them inflexible and limited in handling complex language patterns.

Statistical language models use probabilistic methods to estimate the likelihood of a sequence of words. These models utilize n-grams, which are sequences of n words, to predict the probability of the next word based on the previous ones. While statistical models offer improved language processing capabilities, they still struggle with understanding context and long-range dependencies.
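
A statistical language model can be sketched in a few lines. The toy bigram model below counts word pairs in a tiny made-up corpus and uses the counts to guess the most likely next word; real n-gram models are trained on vastly larger corpora and use smoothing.

    # Toy bigram language model: P(next word | current word) from raw counts.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    bigram_counts = defaultdict(Counter)
    for w1, w2 in zip(corpus, corpus[1:]):
        bigram_counts[w1][w2] += 1

    def predict_next(word, k=3):
        total = sum(bigram_counts[word].values())
        return [(w, c / total) for w, c in bigram_counts[word].most_common(k)]

    print(predict_next("the"))   # e.g. [('cat', 0.25), ('mat', 0.25), ('dog', 0.25)]
    print(predict_next("sat"))   # [('on', 1.0)]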

RNNs are neural networks with memory; these are designed to process sequential data, making them ideal for modeling language. The internal memory enables them to consider context from previous words while predicting the next word. However, standard RNNs are unable to capture long-term dependencies due to a training bottleneck, the "vanishing gradient" problem. 

Large language models are deep learning models that use the transformer architecture to learn the dependencies among words. These models have billions, and often more than 100 billion, parameters that are learned during training. A number of features of the transformer architecture have made it the architecture of choice for sequential data. Even images can be used with the transformer architecture by treating them as sequences of small blocks of pixels. The foremost feature of the transformer architecture is the self-attention mechanism, which weighs the importance of different words in a given context. It thus allows the transformer to capture dependencies across the entire input sequence, making it highly effective in language modeling tasks. Another important feature is that the architecture looks at all the input words of a sentence at the same time, which is key to the use of the attention mechanism.
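
The self-attention computation itself is quite compact. The NumPy sketch below implements scaled dot-product attention for a single head over a handful of token vectors; an actual transformer applies this with learned projection matrices, multiple heads, and many stacked layers.

    # Scaled dot-product self-attention for one head, in NumPy.
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        Q, K, V = X @ Wq, X @ Wk, X @ Wv              # project tokens to queries/keys/values
        scores = Q @ K.T / np.sqrt(K.shape[-1])       # similarity of every token with every other token
        weights = softmax(scores, axis=-1)            # attention weights sum to 1 over the sequence
        return weights @ V                            # each output is a weighted mix of all tokens

    rng = np.random.default_rng(0)
    n_tokens, d_model, d_head = 5, 8, 4
    X = rng.normal(size=(n_tokens, d_model))          # stand-in token embeddings
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)        # (5, 4)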

The transformer architecture consists of two main components: the encoder and the decoder. Both the encoder and decoder are composed of multiple layers of self-attention and feedforward neural networks. The encoder receives an input sequence and produces a sequence of hidden states. The decoder accepts a target sequence and uses the encoder’s output to generate a sequence of predictions. Exceedingly large amounts of text data, sourced from books, websites, Wikipedia, and a multitude of other sources, are used to train the transformer model. The training is done in a self-supervised manner. A typical approach to self-supervised learning is to mask a certain amount of text and train the transformer to predict it. Instead of masking, next-sentence prediction is also used for training. It is this self-supervised learning approach that has made the training of large language models feasible by removing the need for expert annotators.

Pre-trained LLMs

There are a multitude of pre-trained large language models that have been released for use. Before listing some of the popular pre-trained models, let's categorize them in terms of their architecture and usage.

  • Encoder-only Models
  • Decoder-only Models
  • Encoder-Decoder Models

Encoder-only models are trained to predict masked or missing words. The pre-trained models produce a high-dimensional vector representation of the input text, known as embeddings. [You can read about embeddings in the post "Words as Vectors".] These models can be fine-tuned for a variety of NLP tasks, such as sentiment analysis, named entity recognition, and question answering. They are also called auto-encoding models.
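
If you want to see such embeddings for yourself, the Hugging Face Transformers library (mentioned again at the end of this post) makes it a few lines of code. The sketch below mean-pools the token vectors from bert-base-uncased into sentence embeddings; it assumes the transformers and torch packages are installed, and the example sentences are arbitrary.

    # Sentence embeddings from an encoder-only model (BERT) via Hugging Face Transformers.
    # Assumes `pip install transformers torch`; the model is downloaded on first use.
    import torch
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    sentences = ["The market rallied today.", "Stocks rose sharply this morning."]
    inputs = tokenizer(sentences, padding=True, return_tensors="pt")

    with torch.no_grad():
        token_vectors = model(**inputs).last_hidden_state     # shape: (batch, tokens, 768)

    # Mean-pool over real tokens (ignoring padding) to get one vector per sentence.
    mask = inputs["attention_mask"].unsqueeze(-1)
    embeddings = (token_vectors * mask).sum(1) / mask.sum(1)

    similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
    print("embedding size:", embeddings.shape[1], "cosine similarity:", similarity.item())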

Decoder-only models, as one would expect, use only the decoder part of the transformer architecture. These models are generally trained by having the model predict the next word of the input text, which makes them best suited for text generation. They are also called autoregressive models.
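
A quick way to see a decoder-only model at work is the text-generation pipeline with the small, freely available GPT-2 checkpoint (its output will be far less polished than ChatGPT's):

    # Text generation with a small decoder-only model (GPT-2) via the pipeline API.
    # Assumes `pip install transformers torch`; the model is downloaded on first use.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator("Large language models are", max_new_tokens=30)
    print(result[0]["generated_text"])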

Encoder-decoder models use both the encoder and the decoder components of the transformer architecture. In pre-training, a chunk of the input text is replaced by a single mask token and the model is trained to predict the entire masked chunk. These models, also known as sequence-to-sequence models, are suitable for text summarization, translation, and generative question answering tasks.
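
A sequence-to-sequence model can be exercised the same way, for example through the summarization pipeline; the t5-small checkpoint and the sample text below are just illustrative choices:

    # Summarization with an encoder-decoder model (T5) via the pipeline API.
    from transformers import pipeline

    summarizer = pipeline("summarization", model="t5-small")
    text = ("Large language models are deep learning models based on the transformer "
            "architecture. They are trained on very large text corpora in a "
            "self-supervised fashion and can be fine-tuned for many downstream tasks.")
    print(summarizer(text, max_length=40, min_length=10)[0]["summary_text"])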

In many cases, you want to adapt a pre-trained model for a specific task in a particular domain, for example, finance. This is done by applying transfer learning to the pre-trained model with a dataset specific to the application domain. Such models are called fine-tuned models.
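
Fine-tuning follows a fairly standard recipe in the Transformers library. The sketch below adapts a pre-trained encoder for sentiment classification on a domain dataset; the CSV file name, label count, and hyperparameters are placeholders meant to illustrate the pattern, not a tested configuration.

    # Sketch: fine-tuning a pre-trained encoder for domain-specific sentiment classification.
    # "finance_sentiment.csv" (columns: text, label as an integer 0-2) is a placeholder for your own data.
    from datasets import load_dataset
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              TrainingArguments, Trainer)

    dataset = load_dataset("csv", data_files="finance_sentiment.csv")["train"].train_test_split(test_size=0.2)
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

    dataset = dataset.map(tokenize, batched=True)

    model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
    args = TrainingArguments(output_dir="finance-sentiment-model", num_train_epochs=3,
                             per_device_train_batch_size=16)

    Trainer(model=model, args=args,
            train_dataset=dataset["train"], eval_dataset=dataset["test"]).train()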

Examples of Large Language Models (LLMs)

Below is a non-exhaustive list of LLMs.

1. GPTs

The GPT (Generative Pre-trained Transformer) series of models from OpenAI is perhaps the best-known family of LLMs. The release of ChatGPT, based on GPT-3.5, in November 2022 created something of an artificial intelligence storm. These are decoder-only models and are being used for text generation, summarization, and question answering. GPT-4, the most recent model in the series, is being used in Microsoft's Bing Chat.

2. LaMDA

LaMDA, which stands for Language Model for Dialogue Applications, is an LLM from Google. It was trained on dialogue and thus exhibits superior conversational performance. It is mainly used internally at Google, and an earlier version of Google Bard was based on this model.

3. PaLM-2

This model was released by Google in May of this year. It is a state-of-the-art language model trained on text from over 100 languages, scientific papers, and code from numerous public sources. As a result, PaLM-2 is claimed to offer improved multilingual, reasoning, and coding capabilities. The current version of Google Bard is based on PaLM-2.

4. LLaMA

This model was released by Meta in February of this year. It is an autoregressive language model and comes in different sizes: 7B, 13B, 33B, and 65B parameters. It is well suited for question answering and reading comprehension tasks.

5. BERT

BERT from Google stands for Bidirectional Encoder Representations from Transformers. It is an encoder-only LLM. BERT uses bidirectional context to generate representations for words. What this means is that in the sentence "I bought an apple phone", the unidirectional context for encoding the word "apple" is "I bought an", while the bidirectional context also brings in the following word, "phone". Clearly, the bidirectional context provides a more targeted representation. BERT has been used for question answering, sentiment analysis, and text classification. DistilBERT is a compressed version of BERT with fewer parameters but comparable performance.
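
You can see BERT's bidirectional context at work with the fill-mask pipeline: given "I bought an [MASK] phone.", the model can use the word to the right of the blank ("phone") as well as the words to its left when ranking candidates.

    # Fill-mask with BERT: the word to the right of [MASK] ("phone") influences the prediction.
    from transformers import pipeline

    unmasker = pipeline("fill-mask", model="bert-base-uncased")
    for prediction in unmasker("I bought an [MASK] phone.")[:3]:
        print(round(prediction["score"], 3), prediction["token_str"])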

6. T5

This is an encoder-decoder transformer model from Google. It is suitable for tasks including machine translation, question answering, abstractive summarization, and text classification.

An Example of LLM Usage

Here, we are going to look at using LLMs for our daily tasks. The example we will work through uses ChatGPT to get code for building an app that performs next-day stock price prediction. We will give ChatGPT a prompt specifying what we want. The prompt and the response from ChatGPT are shown below. If you want to do this on your own, you will need to get an account with OpenAI.


In the subsequent prompts, I ask ChatGPT to give me code for downloading stock data. Then I prompt ChatGPT to make a Python app out of it. All of this is performed satisfactorily and the app works fine. You can read the responses from ChatGPT and get the complete code in the blog post "Create a Simple Stock Price Prediction App using ChatGPT".
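
The exact code ChatGPT produced is in the linked post; the sketch below is only indicative of the kind of script involved, using yfinance to download prices and a simple linear regression on lagged closing prices. The ticker, lag length, and model choice are illustrative assumptions.

    # Indicative sketch: predict the next day's close from the previous LAGS closing prices.
    # Assumes `pip install yfinance scikit-learn pandas`; ticker and lag length are examples.
    import yfinance as yf
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    LAGS = 5
    prices = yf.Ticker("AAPL").history(period="2y")["Close"].dropna()

    # Each row: the closes from the previous LAGS days predict the close on the current day.
    frame = pd.concat({f"lag_{i}": prices.shift(i) for i in range(1, LAGS + 1)}, axis=1)
    frame["target"] = prices
    frame = frame.dropna()

    X, y = frame.drop(columns="target"), frame["target"]
    model = LinearRegression().fit(X, y)

    # Features for tomorrow: today's close is lag_1, yesterday's is lag_2, and so on.
    latest = prices.iloc[-LAGS:][::-1].to_list()
    prediction = model.predict(pd.DataFrame([latest], columns=X.columns))[0]
    print("Predicted next close:", round(float(prediction), 2))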

Issues with LLMs Usage

Many organizations, including Microsoft, have been quick to deploy LLMs in their products. At the same time, a large group of researchers is concerned about the potential harms that can result as LLMs become more and more powerful. Some issues that have emerged with the current LLMs are:

1. Incorrect and Made-up Answers

Instances of incorrect and fabricated yet convincing responses have been reported by many users. Thus, the responses from LLMs shouldn't be taken at face value and must be reviewed before use.

2. Data Privacy and Confidentiality 

One needs to observe caution as any sensitive, confidential, and proprietary information used in prompts may end up being included in responses to other users. 

3. Model Bias

LLMs have been found to exhibit bias, which arises from the uncurated data from the wild used to train them. Bias exhibited by a model in use can create legal issues.

4. Intellectual Property and Copyright Issues

Since models like ChatGPT have been trained using data from the web, the training data includes copyrighted material available on the web. This can result in copyright violations.

5. Fraud and Scamming Risk

Given how easy it is to generate fake data and misinformation, scams using LLMs are definitely going to increase. As consumers, we need to be alert to such possibilities.

Going Forward with LLMs

LLMs are here to greatly impact almost every facet of society. There will be large benefits from their use, and at the same time challenges are emerging in dealing with the spread of fake yet convincing-looking information. While the thrust of LLM development so far has been on producing bigger and bigger models, the focus appears to be shifting to making LLMs more efficient and more accurate in their responses. We are also going to see more models being made domain-specific.

I hope you enjoyed reading this exploration of LLMs. If you want to learn more, I suggest you visit the Hugging Face Transformers library, where you will find information on many transformer models as well as demos showing their usage.