Day 5 - Artificial Intelligence (AI): ChatBSW » More questions
Zainab (host): Take a look at some more questions and answers relating to Artificial Intelligence below...
Question:
What is a large language model?
Answer:
A large language model (LLM) is a type of AI model trained to understand and generate natural language. LLMs are trained on a wide range of texts, from books to Wikipedia. They are called large because they have millions or even billions of parameters, which are like dials that are tuned during training to capture the nuances of text. ChatGPT is an example of a large language model.
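To picture what "tuning dials" means, here is a minimal toy sketch in Python (our own illustration, not how a real LLM is built): a model with a single parameter learns the made-up rule y = 3 × x by nudging that parameter whenever its guess is wrong. A real LLM does the same kind of nudging, just across billions of parameters at once.

```python
# Toy example: tuning one "dial" (parameter) so predictions match the data.
# Hypothetical training data following the rule y = 3 * x.
data = [(1, 3), (2, 6), (3, 9)]

w = 0.0              # our single parameter, starting untuned
learning_rate = 0.01

for step in range(200):
    for x, target in data:
        prediction = w * x               # the model's guess
        error = prediction - target     # how far off the guess is
        w -= learning_rate * error * x  # nudge the dial to shrink the error

print(f"Tuned parameter: {w:.2f}")  # ends up close to 3.0, the value that fits
```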
Question:
What is the future of AI?
Answer:
At the rate at which technology is moving, it is hard to predict. But a lot of the research we see today could find its way into products, so we might see innovations in climate, healthcare, and self-driving cars. We should also see more progress in the use of Generative AI for scientific discovery.
Question:
What are the dangers of AI?
Answer:
There are quite a few, though they tend to make headlines only when something goes wrong, for example if a self-driving car swerves and crashes. There are also bias and fairness concerns: AI can amplify biases that already exist in society, leading to unfair outcomes. This is a real problem if AI is used to hire people for jobs or even in criminal justice.
Got a question? Email computerscience@bt.com