Amazon Web Services AIF-C01 AWS Certified AI Practitioner Exam Practice Test

Page: 1 / 15
Total 150 questions

AWS Certified AI Practitioner Exam Questions and Answers

Question 1

An accounting firm wants to implement a large language model (LLM) to automate document processing. The firm must proceed responsibly to avoid potential harms.

What should the firm do when developing and deploying the LLM? (Select TWO.)

Options:

A.

Include fairness metrics for model evaluation.

B.

Adjust the temperature parameter of the model.

C.

Modify the training data to mitigate bias.

D.

Avoid overfitting on the training data.

E.

Apply prompt engineering techniques.

Question 2

An AI practitioner trained a custom model on Amazon Bedrock by using a training dataset that contains confidential data. The AI practitioner wants to ensure that the custom model does not generate inference responses based on confidential data.

How should the AI practitioner prevent responses based on confidential data?

Options:

A.

Delete the custom model. Remove the confidential data from the training dataset. Retrain the custom model.

B.

Mask the confidential data in the inference responses by using dynamic data masking.

C.

Encrypt the confidential data in the inference responses by using Amazon SageMaker.

D.

Encrypt the confidential data in the custom model by using AWS Key Management Service (AWS KMS).

Question 3

An ML research team develops custom ML models. The model artifacts are shared with other teams for integration into products and services. The ML team retains the model training code and data. The ML team wants to build a mechanism that it can use to audit models.

Which solution should the ML team use when publishing the custom ML models?

Options:

A.

Create documents with the relevant information. Store the documents in Amazon S3.

B.

Use AWS AI Service Cards for transparency and understanding of models.

C.

Create Amazon SageMaker Model Cards with intended uses and training and inference details.

D.

Create model training scripts. Commit the model training scripts to a Git repository.

Question 4

Which option is a benefit of ongoing pre-training when fine-tuning a foundation model (FM)?

Options:

A.

Helps decrease the model's complexity

B.

Improves model performance over time

C.

Decreases the training time requirement

D.

Optimizes model inference time

Question 5

A company is developing an ML model to predict customer churn.

Which evaluation metric will assess the model's performance on a binary classification task such as predicting churn?

Options:

A.

F1 score

B.

Mean squared error (MSE)

C.

R-squared

D.

Time used to train the model
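
For reference, the F1 score combines precision and recall into a single number (their harmonic mean), which is why it suits binary classification tasks such as churn prediction. A minimal Python sketch; the confusion-matrix counts are illustrative placeholders, not values from the question:

    # Illustrative confusion-matrix counts for a binary churn classifier.
    tp, fp, fn = 80, 10, 20              # true positives, false positives, false negatives

    precision = tp / (tp + fp)           # share of predicted churners that actually churned
    recall = tp / (tp + fn)              # share of actual churners the model caught
    f1 = 2 * precision * recall / (precision + recall)

    print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")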

Question 6

A company wants to use generative AI to increase developer productivity and accelerate software development. The company wants to use Amazon Q Developer.

What can Amazon Q Developer do to help the company meet these requirements?

Options:

A.

Create software snippets, reference tracking, and open-source license tracking.

B.

Run an application without provisioning or managing servers.

C.

Enable voice commands for coding and provide natural language search.

D.

Convert audio files to text documents by using ML models.

Question 7

Which term describes the numerical representations of real-world objects and concepts that AI and natural language processing (NLP) models use to improve understanding of textual information?

Options:

A.

Embeddings

B.

Tokens

C.

Models

D.

Binaries
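
As background, embeddings are dense numeric vectors that place similar meanings close together, and similarity between them is often measured with cosine similarity. A minimal sketch with hand-made 4-dimensional vectors (the values and words are illustrative, not produced by any real model):

    import math

    # Hand-made "embeddings" for two related words; real embeddings have hundreds of dimensions.
    cat = [0.8, 0.1, 0.3, 0.5]
    kitten = [0.7, 0.2, 0.4, 0.5]

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

    print(cosine_similarity(cat, kitten))  # close to 1.0 for semantically similar items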

Question 8

Which AWS service or feature can help an AI development team quickly deploy and consume a foundation model (FM) within the team's VPC?

Options:

A.

Amazon Personalize

B.

Amazon SageMaker JumpStart

C.

PartyRock, an Amazon Bedrock Playground

D.

Amazon SageMaker endpoints

Question 9

A company wants to build an ML model by using Amazon SageMaker. The company needs to share and manage variables for model development across multiple teams.

Which SageMaker feature meets these requirements?

Options:

A.

Amazon SageMaker Feature Store

B.

Amazon SageMaker Data Wrangler

C.

Amazon SageMaker Clarify

D.

Amazon SageMaker Model Cards

Question 10

A company is building an ML model. The company collected new data and analyzed the data by creating a correlation matrix, calculating statistics, and visualizing the data.

Which stage of the ML pipeline is the company currently in?

Options:

A.

Data pre-processing

B.

Feature engineering

C.

Exploratory data analysis

D.

Hyperparameter tuning
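
The activities described (correlation matrix, summary statistics, visualization) are characteristic of exploratory data analysis. A minimal pandas sketch under the assumption of a hypothetical churn.csv file with numeric columns; the file name is a placeholder, and the histogram call also needs matplotlib installed:

    import pandas as pd

    # Hypothetical dataset; the file name and columns are placeholders.
    df = pd.read_csv("churn.csv")

    print(df.describe())                 # summary statistics per column
    print(df.corr(numeric_only=True))    # correlation matrix of numeric features
    df.hist(figsize=(10, 8))             # quick look at feature distributions (needs matplotlib)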

Question 11

A company has thousands of customer support interactions per day and wants to analyze these interactions to identify frequently asked questions and develop insights.

Which AWS service can the company use to meet this requirement?

Options:

A.

Amazon Lex

B.

Amazon Comprehend

C.

Amazon Transcribe

D.

Amazon Translate

Question 12

A company is using a pre-trained large language model (LLM) to build a chatbot for product recommendations. The company needs the LLM outputs to be short and written in a specific language.

Which solution will align the LLM response quality with the company's expectations?

Options:

A.

Adjust the prompt.

B.

Choose an LLM of a different size.

C.

Increase the temperature.

D.

Increase the Top K value.

Question 13

A company wants to create an application to summarize meetings by using meeting audio recordings.

Select and order the correct steps from the following list to create the application. Each step should be selected one time or not at all. (Select and order THREE.)

• Convert meeting audio recordings to meeting text files by using Amazon Polly.

• Convert meeting audio recordings to meeting text files by using Amazon Transcribe.

• Store meeting audio recordings in an Amazon S3 bucket.

• Store meeting audio recordings in an Amazon Elastic Block Store (Amazon EBS) volume.

• Summarize meeting text files by using Amazon Bedrock.

• Summarize meeting text files by using Amazon Lex.

Options:
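
For context, a common pattern for this kind of workflow is to store the recording in Amazon S3, transcribe it with Amazon Transcribe, and summarize the transcript with Amazon Bedrock. A minimal boto3 sketch under those assumptions; the bucket, file, job name, and model ID are placeholders, and in practice the transcription job runs asynchronously, so the transcript is fetched only after the job completes:

    import json
    import boto3

    s3 = boto3.client("s3")
    transcribe = boto3.client("transcribe")
    bedrock = boto3.client("bedrock-runtime")

    # 1) Store the meeting audio recording in an S3 bucket (placeholder names).
    s3.upload_file("meeting.mp3", "example-meetings-bucket", "meeting.mp3")

    # 2) Convert the recording to text with Amazon Transcribe.
    transcribe.start_transcription_job(
        TranscriptionJobName="meeting-summary-demo",
        Media={"MediaFileUri": "s3://example-meetings-bucket/meeting.mp3"},
        MediaFormat="mp3",
        LanguageCode="en-US",
    )

    # 3) After the job completes and the transcript is retrieved, summarize it with Bedrock.
    transcript_text = "...full meeting transcript..."  # placeholder
    response = bedrock.invoke_model(
        modelId="amazon.titan-text-express-v1",         # example model ID
        body=json.dumps({"inputText": "Summarize this meeting:\n" + transcript_text}),
    )
    print(json.loads(response["body"].read()))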

Question 14

A company manually reviews all submitted resumes in PDF format. As the company grows, the company expects the volume of resumes to exceed the company's review capacity. The company needs an automated system to convert the PDF resumes into plain text format for additional processing.

Which AWS service meets this requirement?

Options:

A.

Amazon Textract

B.

Amazon Personalize

C.

Amazon Lex

D.

Amazon Transcribe

Question 15

A social media company wants to use a large language model (LLM) for content moderation. The company wants to evaluate the LLM outputs for bias and potential discrimination against specific groups or individuals.

Which data source should the company use to evaluate the LLM outputs with the LEAST administrative effort?

Options:

A.

User-generated content

B.

Moderation logs

C.

Content moderation guidelines

D.

Benchmark datasets

Question 16

Which phase of the ML lifecycle determines compliance and regulatory requirements?

Options:

A.

Feature engineering

B.

Model training

C.

Data collection

D.

Business goal identification

Question 17

What does an F1 score measure in the context of foundation model (FM) performance?

Options:

A.

Model precision and recall.

B.

Model speed in generating responses.

C.

Financial cost of operating the model.

D.

Energy efficiency of the model's computations.

Question 18

Which technique involves training AI models on labeled datasets to adapt the models to specific industry terminology and requirements?

Options:

A.

Data augmentation

B.

Fine-tuning

C.

Model quantization

D.

Continuous pre-training

Question 19

An education provider is building a question and answer application that uses a generative AI model to explain complex concepts. The education provider wants to automatically change the style of the model response depending on who is asking the question. The education provider will give the model the age range of the user who has asked the question.

Which solution meets these requirements with the LEAST implementation effort?

Options:

A.

Fine-tune the model by using additional training data that is representative of the various age ranges that the application will support.

B.

Add a role description to the prompt context that instructs the model of the age range that the response should target.

C.

Use chain-of-thought reasoning to deduce the correct style and complexity for a response suitable for that user.

D.

Summarize the response text depending on the age of the user so that younger users receive shorter responses.

Question 20

A security company is using Amazon Bedrock to run foundation models (FMs). The company wants to ensure that only authorized users invoke the models. The company needs to identify any unauthorized access attempts to set appropriate AWS Identity and Access Management (IAM) policies and roles for future iterations of the FMs.

Which AWS service should the company use to identify unauthorized users that are trying to access Amazon Bedrock?

Options:

A.

AWS Audit Manager

B.

AWS CloudTrail

C.

Amazon Fraud Detector

D.

AWS Trusted Advisor

Question 21

A company wants to display the total sales for its top-selling products across various retail locations in the past 12 months.

Which AWS solution should the company use to automate the generation of graphs?

Options:

A.

Amazon Q in Amazon EC2

B.

Amazon Q Developer

C.

Amazon Q in Amazon QuickSight

D.

Amazon Q in AWS Chatbot

Question 22

A company is implementing intelligent agents to provide conversational search experiences for its customers. The company needs a database service that will support storage and queries of embeddings from a generative AI model as vectors in the database.

Which AWS service will meet these requirements?

Options:

A.

Amazon Athena

B.

Amazon Aurora PostgreSQL

C.

Amazon Redshift

D.

Amazon EMR
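
For background, Aurora PostgreSQL can store embeddings as vectors and run similarity queries through the pgvector extension. A minimal sketch assuming pgvector is available on the cluster and using the psycopg2 driver; the connection string, table, and vector values are placeholders:

    import psycopg2

    # Placeholder connection details; use your cluster endpoint and credentials.
    conn = psycopg2.connect("dbname=appdb user=app password=secret host=aurora-endpoint")
    cur = conn.cursor()

    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute("CREATE TABLE IF NOT EXISTS docs (id serial PRIMARY KEY, embedding vector(3));")
    cur.execute("INSERT INTO docs (embedding) VALUES ('[0.1, 0.2, 0.3]');")

    # Nearest-neighbor search by L2 distance to a query embedding.
    cur.execute("SELECT id FROM docs ORDER BY embedding <-> '[0.1, 0.2, 0.25]' LIMIT 5;")
    print(cur.fetchall())
    conn.commit()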

Question 23

A company is building a new generative AI chatbot. The chatbot uses an Amazon Bedrock foundation model (FM) to generate responses. During testing, the company notices that the chatbot is prone to prompt injection attacks.

What can the company do to secure the chatbot with the LEAST implementation effort?

Options:

A.

Fine-tune the FM to avoid harmful responses.

B.

Use Amazon Bedrock Guardrails content filters and denied topics.

C.

Change the FM to a more secure FM.

D.

Use chain-of-thought prompting to produce secure responses.

Question 24

Which component of Amazon Bedrock Studio can help secure the content that AI systems generate?

Options:

A.

Access controls

B.

Function calling

C.

Guardrails

D.

Knowledge bases

Question 25

A company is using the Generative AI Security Scoping Matrix to assess security responsibilities for its solutions. The company has identified four different solution scopes based on the matrix.

Which solution scope gives the company the MOST ownership of security responsibilities?

Options:

A.

Using a third-party enterprise application that has embedded generative AI features.

B.

Building an application by using an existing third-party generative AI foundation model (FM).

C.

Refining an existing third-party generative AI foundation model (FM) by fine-tuning the model by using data specific to the business.

D.

Building and training a generative AI model from scratch by using specific data that a customer owns.

Question 26

A company wants to build a lead prioritization application for its employees to contact potential customers. The application must give employees the ability to view and adjust the weights assigned to different variables in the model based on domain knowledge and expertise.

Which ML model type meets these requirements?

Options:

A.

Logistic regression model

B.

Deep learning model built on principal components

C.

K-nearest neighbors (k-NN) model

D.

Neural network
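
As an illustration of why a linear model such as logistic regression exposes weights that domain experts can inspect and adjust, here is a minimal scikit-learn sketch; the lead features, labels, and values are made up:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Made-up lead features: [past_purchases, email_opens, website_visits]
    X = np.array([[0, 1, 2], [5, 8, 9], [1, 0, 1], [6, 7, 10]])
    y = np.array([0, 1, 0, 1])           # 1 = lead converted, 0 = did not convert

    model = LogisticRegression().fit(X, y)

    # One coefficient per input variable, so the weighting is easy to review.
    print(dict(zip(["past_purchases", "email_opens", "website_visits"], model.coef_[0])))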

Question 27

A company wants to use a large language model (LLM) to develop a conversational agent. The company needs to prevent the LLM from being manipulated with common prompt engineering techniques to perform undesirable actions or expose sensitive information.

Which action will reduce these risks?

Options:

A.

Create a prompt template that teaches the LLM to detect attack patterns.

B.

Increase the temperature parameter on invocation requests to the LLM.

C.

Avoid using LLMs that are not listed in Amazon SageMaker.

D.

Decrease the number of input tokens on invocations of the LLM.

Question 28

Which metric measures the runtime efficiency of operating AI models?

Options:

A.

Customer satisfaction score (CSAT)

B.

Training time for each epoch

C.

Average response time

D.

Number of training instances

Question 29

An AI practitioner wants to use a foundation model (FM) to design a search application. The search application must handle queries that have text and images.

Which type of FM should the AI practitioner use to power the search application?

Options:

A.

Multi-modal embedding model

B.

Text embedding model

C.

Multi-modal generation model

D.

Image generation model

Question 30

A company wants to create an application by using Amazon Bedrock. The company has a limited budget and prefers flexibility without long-term commitment.

Which Amazon Bedrock pricing model meets these requirements?

Options:

A.

On-Demand

B.

Model customization

C.

Provisioned Throughput

D.

Spot Instance

Question 31

Which option is a use case for generative AI models?

Options:

A.

Improving network security by using intrusion detection systems

B.

Creating photorealistic images from text descriptions for digital marketing

C.

Enhancing database performance by using optimized indexing

D.

Analyzing financial data to forecast stock market trends

Question 32

A company wants to develop an educational game where users answer questions such as the following: "A jar contains six red, four green, and three yellow marbles. What is the probability of choosing a green marble from the jar?"

Which solution meets these requirements with the LEAST operational overhead?

Options:

A.

Use supervised learning to create a regression model that will predict probability.

B.

Use reinforcement learning to train a model to return the probability.

C.

Use code that will calculate probability by using simple rules and computations.

D.

Use unsupervised learning to create a model that will estimate probability density.
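
For reference, the question reduces to simple counting: 4 green marbles out of 6 + 4 + 3 = 13 total, which plain code (or basic arithmetic) handles without any ML model:

    red, green, yellow = 6, 4, 3
    total = red + green + yellow          # 13 marbles in the jar
    print(green / total)                  # 4/13, roughly 0.308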

Question 33

What does an F1 score measure in the context of foundation model (FM) performance?

Options:

A.

Model precision and recall

B.

Model speed in generating responses

C.

Financial cost of operating the model

D.

Energy efficiency of the model's computations

Question 34

An AI practitioner is using a large language model (LLM) to create content for marketing campaigns. The generated content sounds plausible and factual but is incorrect.

Which problem is the LLM having?

Options:

A.

Data leakage

B.

Hallucination

C.

Overfitting

D.

Underfitting

Question 35

A company is developing a new model to predict the prices of specific items. The model performed well on the training dataset. When the company deployed the model to production, the model's performance decreased significantly.

What should the company do to mitigate this problem?

Options:

A.

Reduce the volume of data that is used in training.

B.

Add hyperparameters to the model.

C.

Increase the volume of data that is used in training.

D.

Increase the model training time.

Question 36

A media company wants to analyze viewer behavior and demographics to recommend personalized content. The company wants to deploy a customized ML model in its production environment. The company also wants to observe if the model quality drifts over time.

Which AWS service or feature meets these requirements?

Options:

A.

Amazon Rekognition

B.

Amazon SageMaker Clarify

C.

Amazon Comprehend

D.

Amazon SageMaker Model Monitor

Question 37

A company has built a solution by using generative AI. The solution uses large language models (LLMs) to translate training manuals from English into other languages. The company wants to evaluate the accuracy of the solution by examining the text generated for the manuals.

Which model evaluation strategy meets these requirements?

Options:

A.

Bilingual Evaluation Understudy (BLEU)

B.

Root mean squared error (RMSE)

C.

Recall-Oriented Understudy for Gisting Evaluation (ROUGE)

D.

F1 score
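
For context, BLEU scores machine-translated text by measuring n-gram overlap with reference translations. A minimal sketch using NLTK's sentence_bleu; the tokenized sentences are illustrative:

    from nltk.translate.bleu_score import sentence_bleu

    reference = [["the", "manual", "describes", "the", "safety", "procedure"]]
    candidate = ["the", "manual", "explains", "the", "safety", "procedure"]

    # Higher scores mean closer n-gram overlap with the reference translation.
    print(sentence_bleu(reference, candidate))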

Question 38

A company has documents that are missing some words because of a database error. The company wants to build an ML model that can suggest potential words to fill in the missing text.

Which type of model meets this requirement?

Options:

A.

Topic modeling

B.

Clustering models

C.

Prescriptive ML models

D.

BERT-based models
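
As background, BERT-style models are trained with a masked-language-modeling objective, which is exactly the "fill in the missing word" task. A minimal Hugging Face sketch; the bert-base-uncased checkpoint is an assumed example and must be downloadable in your environment:

    from transformers import pipeline

    # Example checkpoint with a fill-mask head; [MASK] marks the missing word.
    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    for prediction in fill_mask("The invoice is due at the end of the [MASK]."):
        print(prediction["token_str"], round(prediction["score"], 3))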

Question 39

Which technique breaks a complex task into smaller subtasks that are sent sequentially to a large language model (LLM)?

Options:

A.

One-shot prompting

B.

Prompt chaining

C.

Tree of thoughts

D.

Retrieval Augmented Generation (RAG)
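
To make the idea concrete, prompt chaining sends the output of one LLM call into the prompt of the next. A minimal sketch in which call_llm is a hypothetical helper standing in for whatever model invocation you use:

    def call_llm(prompt: str) -> str:
        # Hypothetical helper; replace the body with a real model invocation.
        return "<model response to: " + prompt[:40] + "...>"

    # Subtask 1: extract the key facts from a long document.
    facts = call_llm("List the key facts in this report:\n" + "...report text...")

    # Subtask 2: feed the previous output into the next prompt.
    summary = call_llm("Write a two-sentence summary using only these facts:\n" + facts)

    # Subtask 3: adapt the result for a specific audience.
    final = call_llm("Rewrite this summary for a non-technical audience:\n" + summary)
    print(final)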

Question 40

A company is building a chatbot to improve user experience. The company is using a large language model (LLM) from Amazon Bedrock for intent detection. The company wants to use few-shot learning to improve intent detection accuracy.

Which additional data does the company need to meet these requirements?

Options:

A.

Pairs of chatbot responses and correct user intents

B.

Pairs of user messages and correct chatbot responses

C.

Pairs of user messages and correct user intents

D.

Pairs of user intents and correct chatbot responses
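
For context, few-shot learning here means placing a handful of labeled examples, pairs of user messages and their correct intents, directly in the prompt. A minimal sketch of such a prompt; the messages and intent labels are made up:

    few_shot_prompt = """Classify the intent of the user message.

    Message: "Where is my order?"        Intent: order_status
    Message: "I want my money back."     Intent: refund_request
    Message: "Do you ship to Canada?"    Intent: shipping_question

    Message: "My package never arrived." Intent:"""

    # This prompt (with its labeled message/intent pairs) is sent to the LLM,
    # which is expected to complete the final line with the correct intent.
    print(few_shot_prompt)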

Question 41

A company is implementing the Amazon Titan foundation model (FM) by using Amazon Bedrock. The company needs to supplement the model by using relevant data from the company's private data sources.

Which solution will meet this requirement?

Options:

A.

Use a different FM

B.

Choose a lower temperature value

C.

Create an Amazon Bedrock knowledge base

D.

Enable model invocation logging

Question 42

An ecommerce company wants to improve search engine recommendations by customizing the results for each user of the company's ecommerce platform.

Which AWS service meets these requirements?

Options:

A.

Amazon Personalize

B.

Amazon Kendra

C.

Amazon Rekognition

D.

Amazon Transcribe

Question 43

A research company implemented a chatbot by using a foundation model (FM) from Amazon Bedrock. The chatbot searches for answers to questions from a large database of research papers.

After multiple prompt engineering attempts, the company notices that the FM is performing poorly because of the complex scientific terms in the research papers.

How can the company improve the performance of the chatbot?

Options:

A.

Use few-shot prompting to define how the FM can answer the questions.

B.

Use domain adaptation fine-tuning to adapt the FM to complex scientific terms.

C.

Change the FM inference parameters.

D.

Clean the research paper data to remove complex scientific terms.

Question 44

A company is using a pre-trained large language model (LLM) to extract information from documents. The company noticed that a newer LLM from a different provider is available on Amazon Bedrock. The company wants to transition to the new LLM on Amazon Bedrock.

What does the company need to do to transition to the new LLM?

Options:

A.

Create a new labeled dataset.

B.

Perform feature engineering.

C.

Adjust the prompt template.

D.

Fine-tune the LLM.

Question 45

A bank is building a chatbot to answer customer questions about opening a bank account. The chatbot will use public bank documents to generate responses. The company will use Amazon Bedrock and prompt engineering to improve the chatbot's responses.

Which prompt engineering technique meets these requirements?

Options:

A.

Complexity-based prompting

B.

Zero-shot prompting

C.

Few-shot prompting

D.

Directional stimulus prompting
