
Confluent CCDAK: Confluent Certified Developer for Apache Kafka Certification Examination Practice Test


Confluent Certified Developer for Apache Kafka Certification Examination Questions and Answers

Question 1

Which two producer exceptions are examples of the class RetriableException? (Select two.)

Options:

A.

LeaderNotAvailableException

B.

RecordTooLargeException

C.

AuthorizationException

D.

NotEnoughReplicasException
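
For context, the producer's exceptions split into subclasses of RetriableException (transient conditions the producer may retry) and non-retriable errors. Below is a minimal sketch of telling the two apart in a send callback; the bootstrap address and topic name are illustrative.

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.errors.RetriableException;
    import java.util.Properties;

    public class RetriableCheck {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // illustrative address
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("demo-topic", "key", "value"), (metadata, exception) -> {
                    if (exception instanceof RetriableException) {
                        // Transient broker-side condition; the producer may retry it
                        System.err.println("Retriable: " + exception.getMessage());
                    } else if (exception != null) {
                        // Fatal for this record, e.g. too large or unauthorized
                        System.err.println("Non-retriable: " + exception.getMessage());
                    }
                });
            }
        }
    }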

Question 2

Clients that connect to a Kafka cluster are required to specify one or more brokers in the bootstrap.servers parameter.

What is the primary advantage of specifying more than one broker?

Options:

A.

It provides redundancy in making the initial connection to the Kafka cluster.

B.

It forces clients to enumerate every single broker in the cluster.

C.

It is the mechanism to distribute a topic’s partitions across multiple brokers.

D.

It provides the ability to wake up dormant brokers.
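
For reference, bootstrap.servers takes a comma-separated list; a short sketch of listing several brokers so the initial metadata request does not depend on a single host (hostnames are illustrative):

    import java.util.Properties;

    public class BootstrapProps {
        public static Properties clientProps() {
            Properties props = new Properties();
            // Any one of the listed brokers can answer the initial metadata
            // request; the client then discovers the full cluster from it.
            props.put("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092"); // illustrative hosts
            return props;
        }
    }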

Question 3

What are two examples of performance metrics?

(Select two.)

Options:

A.

fetch-rate

B.

Number of active users

C.

total-login-attempts

D.

incoming-byte-rate

E.

Number of active user sessions

F.

Time of last failed login
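
As background, fetch-rate and incoming-byte-rate are names of client-level metrics published by the Java consumer; a sketch of reading them programmatically (the consumer is assumed to be configured elsewhere):

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.Metric;
    import org.apache.kafka.common.MetricName;
    import java.util.Map;

    public class PerfMetrics {
        // Assumes an already-configured, running consumer instance.
        static void print(KafkaConsumer<String, String> consumer) {
            Map<MetricName, ? extends Metric> metrics = consumer.metrics();
            for (Map.Entry<MetricName, ? extends Metric> e : metrics.entrySet()) {
                String name = e.getKey().name();
                if (name.equals("fetch-rate") || name.equals("incoming-byte-rate")) {
                    System.out.println(name + " = " + e.getValue().metricValue());
                }
            }
        }
    }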

Question 4

Which partition assignment strategy minimizes partition movements between two consecutive assignments?

Options:

A.

RoundRobinAssignor

B.

StickyAssignor

C.

RangeAssignor

D.

PartitionAssignor
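
For reference, the assignor is selected with the consumer's partition.assignment.strategy property; a sketch (group id and servers are illustrative):

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.StickyAssignor;
    import java.util.Properties;

    public class AssignorProps {
        public static Properties consumerProps() {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // illustrative
            // RangeAssignor, RoundRobinAssignor, StickyAssignor, and
            // CooperativeStickyAssignor are the built-in choices.
            props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, StickyAssignor.class.getName());
            return props;
        }
    }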

Question 5

The producer code below features a Callback class with a method called onCompletion().
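
The snippet itself is not reproduced in this practice set; a representative sketch of such code, with the topic and record contents made up, would be:

    import org.apache.kafka.clients.producer.Callback;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    class ProducerCallback implements Callback {
        @Override
        public void onCompletion(RecordMetadata metadata, Exception exception) {
            if (exception != null) {
                exception.printStackTrace(); // delivery failed
            }
        }
    }

    // Given a configured KafkaProducer<String, String> producer:
    // producer.send(new ProducerRecord<>("demo-topic", "key", "value"), new ProducerCallback());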

When will the onCompletion() method be invoked?

Options:

A.

When a consumer sends an acknowledgement to the producer

B.

When the producer puts the message into its socket buffer

C.

When the producer batches the message

D.

When the producer receives the acknowledgment from the broker

Question 6

You are writing a producer application and need to ensure proper delivery. You configure the producer with acks=all.

Which two actions should you take to ensure proper error handling?

(Select two.)

Options:

A.

Use a callback argument in producer.send() where you check delivery status.

B.

Check that producer.send() returned a RecordMetadata object and is not null.

C.

Surround the call of producer.send() with a try/catch block to catch KafkaException.

D.

Check the value of ProducerRecord.status().
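
For reference, producer error handling has an asynchronous path and a synchronous path; a minimal sketch of both (topic name illustrative, producer assumed configured elsewhere):

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.KafkaException;

    public class SendWithHandling {
        static void send(KafkaProducer<String, String> producer) {
            ProducerRecord<String, String> record = new ProducerRecord<>("demo-topic", "key", "value");
            try {
                // Asynchronous path: the callback reports per-record delivery status.
                producer.send(record, (metadata, exception) -> {
                    if (exception != null) {
                        System.err.println("Delivery failed: " + exception.getMessage());
                    }
                });
            } catch (KafkaException e) {
                // Synchronous path: send() itself can throw, e.g. on
                // serialization failures or producer buffer exhaustion.
                System.err.println("send() threw: " + e.getMessage());
            }
        }
    }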

Question 7

Which configuration allows more time for the consumer poll to process records?

Options:

A.

session.timeout.ms

B.

heartbeat.interval.ms

C.

max.poll.interval.ms

D.

fetch.max.wait.ms
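
As background, these four settings sit side by side in the consumer configuration; a sketch with illustrative values contrasting what each one governs:

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import java.util.Properties;

    public class TimingProps {
        public static Properties consumerProps() {
            Properties props = new Properties();
            props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 45000);    // liveness window for heartbeats
            props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 3000);  // how often heartbeats are sent
            props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 600000); // ceiling on time between poll() calls
            props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 500);       // how long the broker may delay a fetch response
            return props;
        }
    }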

Question 8

You are building a system for a retail store selling products to customers.

Which three datasets should you model as a GlobalKTable?

(Select three.)

Options:

A.

Inventory of products at a warehouse

B.

All purchases at a retail store occurring in real time

C.

Customer profile information

D.

Log of payment transactions

E.

Catalog of products
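
For background, a GlobalKTable replicates a topic's full contents to every application instance, which suits small, slowly changing reference data, while high-volume event data stays a partitioned KStream. A sketch (topic names and serdes illustrative):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.GlobalKTable;
    import org.apache.kafka.streams.kstream.KStream;

    public class RetailTopology {
        public static void build() {
            StreamsBuilder builder = new StreamsBuilder();

            // Reference data, fully replicated for local lookups.
            GlobalKTable<String, String> products =
                    builder.globalTable("product-catalog", Consumed.with(Serdes.String(), Serdes.String()));

            // High-volume event data stays a partitioned stream.
            KStream<String, String> purchases =
                    builder.stream("purchases", Consumed.with(Serdes.String(), Serdes.String()));
        }
    }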

Question 9

Which statement is true about how exactly-once semantics (EOS) work in Kafka Streams?

Options:

A.

Kafka Streams disables log compaction on internal changelog topics to preserve all state changes for potential recovery.

B.

EOS in Kafka Streams relies on transactional producers to atomically commit state updates to changelog topics and output records to Kafka.

C.

Kafka Streams provides EOS by periodically checkpointing state stores and replaying changelogs to recover only unprocessed messages during failure.

D.

EOS in Kafka Streams is implemented by creating a separate Kafka topic for deduplication of all messages processed by the application.
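
For reference, EOS in Kafka Streams is switched on with a single property; a sketch (application id and servers illustrative):

    import org.apache.kafka.streams.StreamsConfig;
    import java.util.Properties;

    public class EosProps {
        public static Properties streamsProps() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "eos-app");           // illustrative
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
            // Uses transactional producers under the hood so changelog updates
            // and output records commit atomically.
            props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
            return props;
        }
    }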

Question 10

You need to correctly join data from two Kafka topics.

Which two scenarios will allow for co-partitioning?

(Select two.)

Options:

A.

Both topics have the same number of partitions.

B.

Both topics have the same key and partitioning strategy.

C.

Both topics have the same value schema.

D.

Both topics have the same retention time.

Question 11

Which two statements about Kafka Connect Single Message Transforms (SMTs) are correct?

(Select two.)

Options:

A.

Multiple SMTs can be chained together and act on source or sink messages.

B.

SMTs are often used to join multiple records from a source data system into a single Kafka record.

C.

Masking data is a good example of an SMT.

D.

SMT functionality is included within Kafka Connect converters.
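
As an illustration, SMTs are declared in the connector configuration, in the same properties notation used elsewhere on this page; a hypothetical sink connector chaining a single field-masking transform (the alias "mask" and the field name are made up):

    transforms=mask
    transforms.mask.type=org.apache.kafka.connect.transforms.MaskField$Value
    transforms.mask.fields=credit_card_number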

Question 12

You have a topic t1 with six partitions. You use Kafka Connect to send data from topic t1 in your Kafka cluster to Amazon S3. Kafka Connect is configured for two tasks.

How many partitions will each task process?

Options:

A.

2

B.

3

C.

6

D.

12

Question 13

You need to configure a sink connector to write records that fail into a dead letter queue topic. Requirements:

    Topic name: DLQ-Topic

    Headers containing error context must be added to the messages

Which three configuration parameters are necessary?

(Select three.)

Options:

A.

errors.tolerance=all

B.

errors.deadletterqueue.topic.name=DLQ-Topic

C.

errors.deadletterqueue.context.headers.enable=true

D.

errors.tolerance=none

E.

errors.log.enable=true

F.

errors.log.include.messages=true

Question 14

Your configuration parameters for a Source connector and Connect worker are:

    offset.flush.interval.ms=60000

    offset.flush.timeout.ms=500

    offset.storage.topic=connect-offsets

    offset.storage.replication.factor=-1

Which four statements match the expected behavior?

(Select four.)

Options:

A.

The connector will wait 60000ms before trying to commit offsets for tasks.

B.

The connector will wait 500ms for offset data to be committed.

C.

The connector will commit offsets to a topic called connect-offsets.

D.

The offsets topic will use the broker default replication factor.

Question 15

You create a producer that writes messages about bank account transactions from tens of thousands of different customers into a topic.

    Your consumers must process these messages with low latency and minimize consumer lag

    Processing takes ~6x longer than producing

    Transactions for each bank account must be processed in order

Which strategy should you use?

Options:

A.

Use the timestamp of the message's arrival as its key.

B.

Use the bank account number found in the message as the message key.

C.

Use a combination of the bank account number and the transaction timestamp as the message key.

D.

Use a unique identifier such as a universally unique identifier (UUID) as the message key.
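
For context, records that share a key are always routed to the same partition, and a single partition preserves ordering; a sketch (topic name illustrative, producer assumed configured elsewhere):

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class TransactionPublisher {
        static void publish(KafkaProducer<String, String> producer, String accountNumber, String txnJson) {
            // Same account number -> same partition, so that account's
            // transactions are consumed in the order they were produced.
            producer.send(new ProducerRecord<>("transactions", accountNumber, txnJson));
        }
    }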

Question 16

You are developing a Java application using a Kafka consumer.

You need to integrate Kafka’s client logs with your own application’s logs using log4j2.

Which Java library dependency must you include in your project?

Options:

A.

SLF4J implementation for Log4j 1.2 (org.slf4j:slf4j-log4j12)

B.

SLF4J implementation for Log4j2 (org.apache.logging.log4j:log4j-slf4j-impl)

C.

None, the right dependency will be added by the Kafka client dependency by transitivity.

D.

Just the log4j2 dependency of the application

Question 17

Match the topic configuration setting with the reason the setting affects topic durability.

(You are given settings like unclean.leader.election.enable=false, replication.factor, min.insync.replicas=2)
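
For background, these settings work together; a sketch in topic-configuration notation with illustrative values:

    # Three copies of each partition (chosen at topic creation).
    replication.factor=3
    # With acks=all, a write succeeds only if at least two replicas are in sync.
    min.insync.replicas=2
    # Never elect an out-of-sync replica as leader, even at the cost of availability.
    unclean.leader.election.enable=false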

Options:
