2025 CASE STUDY | THE PERFECT CHATBOT
FLIP CARDS
DESIGNED FOR IB EXAMINATIONS
Q: What is backpropagation?
A: An algorithm used for training neural networks by adjusting weights to minimise error.
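To make this concrete, here is a minimal Python sketch of backpropagation for a single sigmoid neuron; the input, target, and learning rate are illustrative values, not taken from the case study.

```python
# Backpropagation on a one-neuron network with squared-error loss.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, target = 0.5, 1.0      # one hypothetical training example
w, b, lr = 0.1, 0.0, 0.5  # initial weight, bias, learning rate

for step in range(100):
    # Forward pass: compute the prediction.
    y = sigmoid(w * x + b)
    # Backward pass: chain rule gives the gradient of the error
    # with respect to the weight and bias.
    dloss_dy = 2 * (y - target)
    dy_dz = y * (1 - y)             # derivative of the sigmoid
    w -= lr * dloss_dy * dy_dz * x  # adjust weights to minimise error
    b -= lr * dloss_dy * dy_dz

print(f"prediction after training: {sigmoid(w * x + b):.3f}")  # close to 1.0
```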
Q: Define natural language processing (NLP).
A: A field of AI focused on enabling machines to understand and respond to human language.
Q: What is a dataset?
A: A collection of data used to train and evaluate machine learning models.
Q: What is the purpose of a loss function?
A: To measure the difference between the predicted output and the actual target output.
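A small worked example, using mean squared error as the loss function (the numbers are invented):

```python
# Mean squared error between predicted outputs and actual targets.
predictions = [0.9, 0.2, 0.8]
targets = [1.0, 0.0, 1.0]

mse = sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)
print(f"MSE: {mse:.3f}")  # 0 would mean a perfect match
```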
Q: What does a GPU do in machine learning?
A: It accelerates the processing of large-scale data and complex computations.
Q: What is an RNN?
A: Recurrent Neural Network, designed to handle sequential data.
Q: Define LSTM.
A: Long Short-Term Memory, a type of RNN that handles long-term dependencies.
Q: What is a transformer neural network?
A: A neural network using a self-attention mechanism for parallel processing of data.
Q: What is BPTT?
A: Backpropagation through time, a variant of backpropagation for RNNs.
Q: What is a memory cell state in LSTM?
A: The long-term information carried across time steps, with gates controlling what is added, kept, or forgotten.
Q: What is data cleaning?
A: The process of removing irrelevant, duplicate, or noisy data.
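A minimal data-cleaning sketch in Python, run on a toy list of chatbot training utterances (the data is made up):

```python
# Remove empty, whitespace-only, and duplicate entries from raw text data.
raw = ["Hello!", "hello!", "Hello!", "", "   ", "How are you?"]

seen = set()
cleaned = []
for text in raw:
    norm = text.strip().lower()    # normalise before comparing
    if norm and norm not in seen:  # skip noise and duplicates
        seen.add(norm)
        cleaned.append(norm)

print(cleaned)  # ['hello!', 'how are you?']
```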
Q: What is synthetic data?
A: Artificially generated data used to supplement real data.
Q: What is bias in datasets?
A: Systematic errors that lead to unfair or discriminatory outcomes.
Q: What is sampling bias?
A: When the dataset is not representative of the entire population.
Q: What is selection bias?
A: Bias introduced when data is not randomly selected but is chosen based on specific criteria.
Q: Why is data privacy important?
A: To protect sensitive personal information from unauthorised access.
Q: What is transparency in AI?
A: Making decision-making processes clear and understandable to users.
Q: How can we prevent misinformation by chatbots?
A: By integrating fact-checking mechanisms.
Q: What is accountability in chatbot ethics?
A: Determining who is responsible for the chatbot’s actions and decisions.
Q: Define ethical use of chatbots.
A: Ensuring chatbots operate fairly, transparently, and responsibly, respecting user privacy.
Q: What is hyperparameter tuning?
A: The process of optimising the parameters that govern model training, such as the learning rate or batch size.
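A hedged sketch of one common tuning approach, grid search; train_and_score is a hypothetical stand-in for training a model and returning its validation score:

```python
# Grid search: try every combination of candidate hyperparameters.
import itertools

def train_and_score(lr, batch_size):
    # Placeholder: a real implementation would train and evaluate a model.
    return 1.0 - abs(lr - 0.01) - abs(batch_size - 32) / 1000

learning_rates = [0.001, 0.01, 0.1]
batch_sizes = [16, 32, 64]

best_lr, best_bs = max(
    itertools.product(learning_rates, batch_sizes),
    key=lambda combo: train_and_score(*combo),
)
print(f"best hyperparameters: lr={best_lr}, batch_size={best_bs}")
```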
Q: What is a self-attention mechanism?
A: A technique that captures relationships between words in a sequence by computing attention weights.
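A minimal NumPy sketch of scaled dot-product self-attention; Q, K, and V are random stand-ins for the learned projections of a 4-token sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # queries
K = rng.normal(size=(4, 8))   # keys
V = rng.normal(size=(4, 8))   # values

scores = Q @ K.T / np.sqrt(K.shape[-1])  # similarity of every word pair
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
output = weights @ V                     # each word as a weighted mix of values

print(weights.shape, output.shape)  # (4, 4) attention weights, (4, 8) outputs
```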
Q: Define lexical analysis.
A: Breaking down text into individual words and sentences for further processing.
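A simplified lexical-analysis sketch using the standard-library re module (real tokenisers handle many more edge cases):

```python
import re

text = "Chatbots answer questions. They rely on NLP!"

# Split into sentences after ., ! or ?, then into individual words.
sentences = re.split(r"(?<=[.!?])\s+", text)
words = [re.findall(r"\w+", s) for s in sentences]

print(sentences)  # ['Chatbots answer questions.', 'They rely on NLP!']
print(words)      # [['Chatbots', 'answer', 'questions'], ['They', 'rely', 'on', 'NLP']]
```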
Q: What is syntactical analysis?
A: Analysing the grammatical structure of a sentence to identify relationships between words.
Q: What is semantic analysis?
A: Understanding the meaning of words and sentences beyond their surface structure.
Q: What is model pruning?
A: Removing unnecessary neurons or connections in a neural network to reduce complexity.
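A sketch of one simple pruning strategy, magnitude-based pruning, on a toy weight matrix; the 50% threshold is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=(4, 4))

# Zero out every weight whose magnitude is below the median.
threshold = np.percentile(np.abs(weights), 50)
pruned = np.where(np.abs(weights) < threshold, 0.0, weights)

print(np.count_nonzero(weights), "->", np.count_nonzero(pruned), "non-zero weights")
```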
Q: What is quantisation in machine learning?
A: Reducing the precision of weights to lower bit sizes to enhance model efficiency.
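A minimal sketch of post-training quantisation: mapping float32 weights to int8 with a single scale factor (the weights are invented):

```python
import numpy as np

weights = np.array([0.42, -1.37, 0.05, 0.91], dtype=np.float32)

scale = np.abs(weights).max() / 127              # fit the range into int8
quantised = np.round(weights / scale).astype(np.int8)
restored = quantised.astype(np.float32) * scale  # approximate reconstruction

print(quantised)                   # small integers: 4x less storage than float32
print(np.abs(weights - restored))  # the precision lost per weight
```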
Q: Define knowledge distillation.
A: Transferring knowledge from a larger model to a smaller one to maintain performance while reducing complexity.
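A hedged NumPy sketch of the core distillation idea: the student learns to match the teacher's temperature-softened output distribution (the logits are made up):

```python
import numpy as np

def softmax(logits, T=1.0):
    z = np.exp((logits - logits.max()) / T)
    return z / z.sum()

teacher_logits = np.array([4.0, 1.5, 0.5])
student_logits = np.array([2.5, 1.0, 1.2])
T = 2.0  # a higher temperature softens the distribution

p_teacher = softmax(teacher_logits, T)
p_student = softmax(student_logits, T)

# Distillation loss: cross-entropy of the student against the
# teacher's soft targets, which the student minimises during training.
distill_loss = -np.sum(p_teacher * np.log(p_student))
print(f"distillation loss: {distill_loss:.3f}")
```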
Q: What is parallel processing?
A: Dividing tasks into smaller sub-tasks that can be processed simultaneously.
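A minimal parallel-processing sketch using the standard-library concurrent.futures; handle_query is a hypothetical stand-in for per-query work:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_query(query):
    # Placeholder for real work such as tokenising or model inference.
    return query.upper()

queries = ["hello", "what is nlp", "goodbye"]

# The three sub-tasks are processed concurrently rather than one by one.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(handle_query, queries))

print(results)
```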
Q: Why is cloud computing used in AI?
A: For scalable and flexible computing resources that can be adjusted based on demand.
Q: What is the primary function of a chatbot?
A: To provide automated responses to user queries using AI and NLP techniques.
Q: Define latency in chatbots.
A: The delay between a user's query and the chatbot’s response.
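A simple way to measure that delay in Python; respond is a hypothetical stand-in for the chatbot's inference step:

```python
import time

def respond(query):
    time.sleep(0.05)  # stand-in for real model inference
    return "Here is an answer."

start = time.perf_counter()
respond("What is backpropagation?")
latency_ms = (time.perf_counter() - start) * 1000
print(f"latency: {latency_ms:.1f} ms")
```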
Q: What is discourse integration?
A: Integrating the meaning of a sentence with the larger context of the conversation.
Q: What is pragmatic analysis?
A: Analysing the social, legal, and cultural context of a sentence to understand its intended meaning.
Q: Why is contextual understanding important for chatbots?
A: To provide coherent and relevant responses based on the broader conversation context.
Q: What is historical bias?
A: Bias that occurs when training data reflects outdated patterns that may not be relevant to current scenarios.
Q: What is labelling bias?
A: When the labels applied to training data are subjective, inaccurate, or incomplete.
Q: What is linguistic bias?
A: Bias resulting from training data that favours certain dialects, vocabularies, or linguistic styles.
Q: How can bias be detected in datasets?
A: By regularly auditing datasets and algorithms for biases and taking corrective actions.
Q: Why is fairness important in chatbot interactions?
A: To ensure equitable service to all users, regardless of their background.
Q: What is a large language model (LLM)?
A: An advanced neural network trained on vast amounts of text data to understand and generate human-like language.
Q: Define natural language understanding (NLU).
A: A component of NLP focused on understanding the user’s input by analysing linguistic features and context.
Q: What is pre-processing in data handling?
A: Cleaning, transforming, and reducing data to improve its quality and make it suitable for training.
Q: What is the vanishing gradient problem?
A: A problem in training deep neural networks where gradients become very small, making it difficult to update weights effectively.
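A numeric illustration of why gradients vanish: the sigmoid's derivative never exceeds 0.25, and backpropagation multiplies one such factor per layer:

```python
# Best-case gradient after repeatedly applying the sigmoid derivative.
grad = 1.0
for layer in range(20):
    grad *= 0.25
print(f"gradient after 20 layers: {grad:.2e}")  # about 9.09e-13, effectively zero
```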
Q: What is a tensor processing unit (TPU)?
A: Custom hardware designed specifically to accelerate machine learning workloads.
Q: How does data augmentation help in training chatbots?
A: By generating additional data to increase the size and diversity of the dataset.
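A toy text-augmentation sketch: creating extra training utterances by swapping in synonyms (the synonym table is an illustrative assumption, not a real resource):

```python
import random

synonyms = {"hello": ["hi", "hey"], "help": ["assist", "support"]}

def augment(sentence):
    # Replace each word with a random synonym where one exists.
    words = sentence.split()
    return " ".join(random.choice(synonyms.get(w, [w])) for w in words)

random.seed(0)
base = "hello can you help me"
variants = {augment(base) for _ in range(5)}
print(variants)  # several paraphrases of the same utterance
```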
Q: What is the role of encryption in data security?
A: To protect data both in transit and at rest from unauthorised access.
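A short symmetric-encryption sketch using the third-party cryptography package (installed with pip install cryptography); the message is illustrative:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # the key itself must be stored securely
f = Fernet(key)

token = f.encrypt(b"user chat history")  # ciphertext, safe to store or transmit
print(f.decrypt(token))                  # b'user chat history', key required
```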
Q: How can distributed computing benefit chatbots?
A: By parallelising processing across multiple machines to improve efficiency and reduce latency.
Q: Why is user feedback important for chatbot improvement?
A: It helps identify and correct inaccuracies, continuously improving the chatbot’s performance.
Q: What is explainable AI?
A: Techniques that make the decision-making process of AI systems understandable to users.