If kubectl is failing to retrieve information from the cluster, where can you find Pod logs to troubleshoot?
/var/log/pods/
~/.kube/config
/var/log/k8s/
/etc/kubernetes/
The correct answer is A: /var/log/pods/. When kubectl logs can't retrieve logs (for example, API connectivity issues, auth problems, or kubelet/API proxy issues), you can often troubleshoot directly on the node where the Pod ran. Kubernetes nodes typically store container logs on disk, and a common location is under /var/log/pods/, organized by namespace, Pod name/UID, and container. This directory contains symlinks or files that map to the underlying container runtime log location (often under /var/log/containers/ as well, depending on distro/runtime setup).
Option B (~/.kube/config) is your local kubeconfig file; it contains cluster endpoints and credentials, not Pod logs. Option D (/etc/kubernetes/) contains Kubernetes component configuration/manifests on some installations (especially control plane), not application logs. Option C (/var/log/k8s/) is not a standard Kubernetes log path.
Operationally, the node-level log locations depend on the container runtime and logging configuration, but the Kubernetes convention is that kubelet writes container logs to a known location and exposes them through the API so kubectl logs works. If the API path is broken, node access becomes your fallback. This is also why secure node access is sensitive: anyone with node root access can potentially read logs (and other data), which is part of the threat model.
So, the best answer for where to look on the node for Pod logs when kubectl can't retrieve them is /var/log/pods/, option A.
=========
Which of the following observability data streams would be most useful when desiring to plot resource consumption and predicted future resource exhaustion?
stdout
Traces
Logs
Metrics
The correct answer is D: Metrics. Metrics are numeric time-series measurements collected at regular intervals, making them ideal for plotting resource consumption over time and forecasting future exhaustion. In Kubernetes, this includes CPU usage, memory usage, disk I/O, network throughput, filesystem usage, Pod restarts, and node allocatable vs requested resources. Because metrics are structured and queryable (often with Prometheus), you can compute rates, aggregates, percentiles, and trends, and then apply forecasting methods to predict when a resource will run out.
Logs and traces have different purposes. Logs are event records (strings) that are great for debugging and auditing, but they are not naturally suited to continuous quantitative plotting unless you transform them into metrics (log-based metrics). Traces capture end-to-end request paths and latency breakdowns; they help you find slow spans and dependency bottlenecks, not forecast CPU/memory exhaustion. stdout is just a stream where logs might be written; by itself it's not an observability data type used for capacity trending.
In Kubernetes observability stacks, metrics are typically scraped from components and workloads: kubelet/cAdvisor exports container metrics, node exporters expose host metrics, and applications expose business/system metrics. The metrics pipeline (Prometheus, OpenTelemetry metrics, managed monitoring) enables dashboards and alerting. For resource exhaustion, you often alert on “time to fill” (e.g., predicted disk fill in < N hours), high sustained utilization, or rapidly increasing error rates due to throttling.
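As an illustrative sketch of a "time to fill" alert (node_filesystem_avail_bytes is a standard node exporter metric; the rule name, lookback window, and thresholds here are hypothetical), a Prometheus alerting rule using predict_linear might look like:

```yaml
groups:
  - name: capacity
    rules:
      - alert: NodeDiskWillFillIn4Hours
        # Fit a linear trend over the last 6h of free-space samples and
        # project 4h ahead; fire if the projection drops below zero.
        expr: predict_linear(node_filesystem_avail_bytes{mountpoint="/"}[6h], 4 * 3600) < 0
        for: 30m
        labels:
          severity: warning
        annotations:
          summary: "Filesystem predicted to fill within 4 hours"
```

The `for: 30m` clause keeps a briefly noisy trend from paging anyone; only a sustained prediction fires the alert.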
Therefore, the most appropriate data stream for plotting consumption and predicting exhaustion is Metrics, option D.
=========
What is Serverless computing?
A computing method of providing backend services on an as-used basis.
A computing method of providing services for AI and ML operating systems.
A computing method of providing services for quantum computing operating systems.
A computing method of providing services for cloud computing operating systems.
Serverless computing is a cloud execution model where the provider manages infrastructure concerns and you consume compute as a service, typically billed based on actual usage (requests, execution time, memory), which matches A. In other words, you deploy code (functions) or sometimes containers, configure triggers (HTTP events, queues, schedules), and the platform automatically provisions capacity, scales it up/down, and handles much of availability and fault tolerance behind the scenes.
From a cloud-native architecture standpoint, "serverless" doesn't mean there are no servers; it means developers don't manage servers. The platform abstracts away node provisioning, OS patching, and much of runtime scaling logic. This aligns with the "as-used basis" phrasing: you pay for what you run rather than maintaining always-on capacity.
It's also useful to distinguish serverless from Kubernetes. Kubernetes automates orchestration (scheduling, self-healing, scaling), but operating Kubernetes still involves cluster-level capacity decisions, node pools, upgrades, networking baseline, and policy. With serverless, those responsibilities are pushed further toward the provider/platform. Kubernetes can enable serverless experiences (for example, event-driven autoscaling frameworks), but serverless as a model is about a higher level of abstraction than "orchestrate containers yourself."
Options B, C, and D are incorrect because they describe specialized or vague “operating system” services rather than the commonly accepted definition. Serverless is not specifically about AI/ML OSs or quantum OSs; it’s a general compute delivery model that can host many kinds of workloads.
Therefore, the correct definition in this question is A: providing backend services on an as-used basis.
=========
Which storage operator in Kubernetes can help the system to self-scale, self-heal, etc?
Rook
Kubernetes
Helm
Container Storage Interface (CSI)
Rook is a Kubernetes storage operator that helps manage and automate storage systems in a Kubernetes-native way, so A is correct. The key phrase in the question is "storage operator … self-scale, self-heal." Operators extend Kubernetes by using controllers to reconcile a desired state. Rook applies that model to storage, commonly by managing storage backends like Ceph (and other systems depending on configuration).
With an operator approach, you declare how you want storage to look (cluster size, pools, replication, placement, failure domains), and the operator works continuously to maintain that state. That includes operational behaviors that feel “self-healing” such as reacting to failed storage Pods, rebalancing, or restoring desired replication counts (the exact behavior depends on the backend and configuration). The important KCNA-level idea is that Rook uses Kubernetes controllers to automate day-2 operations for storage in a way consistent with Kubernetes’ reconciliation loops.
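As a sketch of that declarative model, a minimal Rook CephCluster resource might look like the following (field values are illustrative; consult Rook's CRD reference for the full schema and current defaults):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3            # desired monitor quorum; the operator maintains it
  storage:
    useAllNodes: true   # let the operator use every eligible node
    useAllDevices: true # and every unused raw device on those nodes
```

You declare the target state (three monitors, storage on all nodes); the Rook operator continuously reconciles toward it, which is the "self-healing" behavior the question describes.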
The other options do not match the question: “Kubernetes” is the orchestrator itself, not a storage operator. “Helm” is a package manager for Kubernetes apps—it can install storage software, but it is not an operator that continuously reconciles and self-manages. “CSI” (Container Storage Interface) is an interface specification that enables pluggable storage drivers; CSI drivers provision and attach volumes, but CSI itself is not a “storage operator” with the broader self-managing operator semantics described here.
So, for "storage operator that can help with self-* behaviors," Rook is the correct choice.
=========
Which authorization-mode allows granular control over the operations that different entities can perform on different objects in a Kubernetes cluster?
Webhook Mode Authorization Control
Role Based Access Control
Node Authorization Access Control
Attribute Based Access Control
Role Based Access Control (RBAC) is the standard Kubernetes authorization mode that provides granular control over what users and service accounts can do to which resources, so B is correct. RBAC works by defining Roles (namespaced) and ClusterRoles (cluster-wide) that contain sets of rules. Each rule specifies API groups, resource types, resource names (optional), and allowed verbs such as get, list, watch, create, update, patch, and delete. You then attach these roles to identities using RoleBindings or ClusterRoleBindings.
This gives fine-grained, auditable access control. For example, you can allow a CI service account to create and patch Deployments only in a specific namespace, while restricting it from reading Secrets. You can allow developers to view Pods and logs but prevent them from changing cluster-wide networking resources. This is exactly the “granular control over operations on objects” described by the question.
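The CI example above can be sketched as a Role plus RoleBinding (the namespace, service account, and role names are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: ci
  name: deployment-manager
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "patch"]   # no access to Secrets
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: ci
  name: ci-deployment-manager
subjects:
  - kind: ServiceAccount
    name: ci-bot
    namespace: ci
roleRef:
  kind: Role
  name: deployment-manager
  apiGroup: rbac.authorization.k8s.io
```

Because the Role is namespaced and lists only Deployment verbs, the ci-bot account cannot read Secrets or touch resources in other namespaces.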
Why other options are not the best answer: "Webhook mode" is an authorization mechanism where Kubernetes calls an external service to decide authorization. While it can be granular depending on the external system, Kubernetes' common built-in answer for granular object-level control is RBAC. "Node authorization" is a specialized authorizer for kubelets/nodes to access resources they need; it's not the general-purpose system for all cluster entities. ABAC (Attribute-Based Access Control) is an older mechanism and is not the primary recommended authorization model; it can be expressive but is less commonly used and not the default best-practice for Kubernetes authorization today.
In Kubernetes security practice, RBAC is typically paired with authentication (certs/OIDC), admission controls, and namespaces to build a defense-in-depth security posture. RBAC policy is also central to least privilege: granting only what is necessary for a workload or user role to function. This reduces blast radius if credentials are compromised.
Therefore, the verified answer is B: Role Based Access Control.
=========
Why is Cloud-Native Architecture important?
Cloud Native Architecture revolves around containers, microservices and pipelines.
Cloud Native Architecture removes constraints to rapid innovation.
Cloud Native Architecture is modern for application deployment and pipelines.
Cloud Native Architecture is a bleeding edge technology and service.
Cloud-native architecture is important because it enables organizations to build and run software in a way that supports rapid innovation while maintaining reliability, scalability, and efficient operations. Option B best captures this: cloud native removes constraints to rapid innovation, so B is correct.
In traditional environments, innovation is slowed by heavyweight release processes, tightly coupled systems, manual operations, and limited elasticity. Cloud-native approaches—containers, declarative APIs, automation, and microservices-friendly patterns—reduce those constraints. Kubernetes exemplifies this by offering a consistent deployment model, self-healing, automated rollouts, scaling primitives, and a large ecosystem of delivery and observability tools. This makes it easier to ship changes more frequently and safely: teams can iterate quickly, roll back confidently, and standardize operations across environments.
Option A is partly descriptive (containers/microservices/pipelines are common in cloud native), but it doesn't explain why it matters; it lists ingredients rather than the benefit. Option C is vague ("modern") and again doesn't capture the core value proposition. Option D is incorrect because cloud native is not primarily about being "bleeding edge"; it's about proven practices that improve time-to-market and operational stability.
A good way to interpret “removes constraints” is: cloud native shifts the bottleneck away from infrastructure friction. With automation (IaC/GitOps), standardized runtime packaging (containers), and platform capabilities (Kubernetes controllers), teams spend less time on repetitive manual work and more time delivering features. Combined with observability and policy automation, this results in faster delivery with better reliability—exactly the reason cloud-native architecture is emphasized across the Kubernetes ecosystem.
=========
What is a DaemonSet?
It’s a type of workload that ensures a specific set of nodes run a copy of a Pod.
It’s a type of workload responsible for maintaining a stable set of replica Pods running in any node.
It’s a type of workload that needs to be run periodically on a given schedule.
It’s a type of workload that provides guarantees about ordering, uniqueness, and identity of a set of Pods.
A DaemonSet ensures that a copy of a Pod runs on each node (or a selected subset of nodes), which matches option A and makes it correct. DaemonSets are ideal for node-level agents that should exist everywhere, such as log shippers, monitoring agents, CNI components, storage daemons, and security scanners.
DaemonSets differ from Deployments/ReplicaSets because their goal is not “N replicas anywhere,” but “one replica per node” (subject to node selection). When nodes are added to the cluster, the DaemonSet controller automatically schedules the DaemonSet Pod onto the new nodes. When nodes are removed, the Pods associated with those nodes are cleaned up. You can restrict placement using node selectors, affinity rules, or tolerations so that only certain nodes run the DaemonSet (for example, only Linux nodes, only GPU nodes, or only nodes with a dedicated label).
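A minimal DaemonSet sketch showing node selection and a control-plane toleration (the image and names are hypothetical):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-shipper
  namespace: logging
spec:
  selector:
    matchLabels:
      app: log-shipper
  template:
    metadata:
      labels:
        app: log-shipper
    spec:
      nodeSelector:
        kubernetes.io/os: linux        # only Linux nodes run this agent
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule           # also run on tainted control-plane nodes
      containers:
        - name: agent
          image: example.com/log-shipper:1.0
```

Note there is no replica count anywhere: the controller derives "one Pod per matching node" from the cluster's node set.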
Option B sounds like a ReplicaSet/Deployment behavior (stable set of replicas), not a DaemonSet. Option C describes CronJobs (scheduled, recurring run-to-completion workloads). Option D describes StatefulSets, which provide stable identity, ordering, and uniqueness guarantees for stateful replicas.
Operationally, DaemonSets matter because they often run critical cluster services. During maintenance and upgrades, the DaemonSet update strategy determines how those node agents roll out across the fleet. Since DaemonSets can tolerate taints (like control-plane node taints), they can also be used to ensure essential agents run across all nodes, including special pools. Thus, the correct definition is A.
=========
Which one of the following is an open source runtime security tool?
lxd
containerd
falco
gVisor
The correct answer is C: Falco. Falco is a widely used open-source runtime security tool (originally created by Sysdig and now a CNCF project) designed to detect suspicious behavior at runtime by monitoring system calls and other kernel-level signals. In Kubernetes environments, Falco helps identify threats such as unexpected shell access in containers, privilege escalation attempts, access to sensitive files, anomalous network tooling, crypto-mining patterns, and other behaviors that indicate compromise or policy violations.
The other options are not primarily “runtime security tools” in the detection/alerting sense:
containerd is a container runtime responsible for executing containers; it's not a security detection tool.
lxd is a system container and VM manager; again, not a runtime threat detection tool.
gVisor is a sandboxed container runtime that improves isolation by interposing a user-space kernel; it's a security mechanism, but the question asks for a runtime security tool (monitoring/detection). Falco fits that definition best.
In cloud-native security practice, Falco typically runs as a DaemonSet so it can observe activity on every node. It uses rules to define what “bad” looks like and can emit alerts to SIEM systems, logging backends, or incident response workflows. This complements preventative controls like RBAC, Pod Security Admission, seccomp, and least privilege configurations. Preventative controls reduce risk; Falco provides visibility and detection when something slips through.
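As an illustrative Falco rule for the "unexpected shell in a container" case (this assumes Falco's stock macros such as spawned_process and container; treat it as a sketch rather than a production rule):

```yaml
- rule: Shell Spawned in Container
  desc: Detect an interactive shell started inside a container
  condition: >
    spawned_process and container and proc.name in (bash, sh, zsh)
  output: >
    Shell started in container (user=%user.name
    container=%container.name command=%proc.cmdline)
  priority: WARNING
```

Rules like this are evaluated against the syscall stream on every node, which is why Falco is typically deployed as a DaemonSet.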
Therefore, among the provided choices, the verified runtime security tool is Falco (C).
=========
In the Kubernetes platform, which component is responsible for running containers?
etcd
CRI-O
cloud-controller-manager
kube-controller-manager
In Kubernetes, the actual act of running containers on a node is performed by the container runtime. The kubelet instructs the runtime via CRI, and the runtime pulls images, creates containers, and manages their lifecycle. Among the options provided, CRI-O is the only container runtime, so B is correct.
It's important to be precise: the component that "runs containers" is not the control plane and not etcd. etcd (option A) stores cluster state (API objects) as the backing datastore; it never runs containers. cloud-controller-manager (option C) integrates with cloud APIs for infrastructure like load balancers and nodes. kube-controller-manager (option D) runs controllers that reconcile Kubernetes objects (Deployments, Jobs, Nodes, etc.) but does not execute containers on worker nodes.
CRI-O is a CRI implementation that is optimized for Kubernetes and typically uses an OCI runtime (like runc) under the hood to start containers. Another widely used runtime is containerd. The runtime is installed on nodes and is a prerequisite for kubelet to start Pods. When a Pod is scheduled to a node, kubelet reads the PodSpec and asks the runtime to create a “pod sandbox” and then start the container processes. Runtime behavior also includes pulling images, setting up namespaces/cgroups, and exposing logs/stdout streams back to Kubernetes tooling.
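On recent Kubernetes versions, the kubelet's CRI endpoint can be set in its configuration file (older setups pass --container-runtime-endpoint as a kubelet flag, and socket paths vary by distro); a minimal sketch:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# CRI socket the kubelet dials to run containers. Typical paths:
#   CRI-O:      unix:///var/run/crio/crio.sock
#   containerd: unix:///run/containerd/containerd.sock
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
```

This makes the division of labor concrete: kubelet speaks CRI over this socket, and CRI-O (or containerd) does the actual container execution.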
So while "the container runtime" is the most general answer, the question's option list makes CRI-O the correct selection because it is a container runtime responsible for running containers in Kubernetes.
=========
What element allows Kubernetes to run Pods across the fleet of nodes?
The node server.
The etcd static pods.
The API server.
The kubelet.
The correct answer is D (the kubelet) because the kubelet is the node agent responsible for actually running Pods on each node. Kubernetes can orchestrate workloads across many nodes because every worker node (and control-plane node that runs workloads) runs a kubelet that continuously watches the API server for PodSpecs assigned to that node and then ensures the containers described by those PodSpecs are started and kept running. In other words, the kube-scheduler decides where a Pod should run (sets spec.nodeName), but the kubelet is what makes the Pod run on that chosen node.
The kubelet integrates with the container runtime (via CRI) to pull images, create sandboxes, start containers, and manage their lifecycle. It also reports node and Pod status back to the control plane, executes liveness/readiness/startup probes, mounts volumes, and performs local housekeeping that keeps the node aligned with the declared desired state. This node-level reconciliation loop is a key Kubernetes pattern: the control plane declares intent, and the kubelet enforces it on the node.
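The probe behavior described above can be sketched in a PodSpec that the kubelet enforces locally (the image, ports, and paths are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: app
      image: example.com/web:1.0
      startupProbe:              # gates the other probes until the app boots
        httpGet: {path: /healthz, port: 8080}
        failureThreshold: 30
        periodSeconds: 2
      readinessProbe:            # controls whether the Pod receives traffic
        httpGet: {path: /ready, port: 8080}
        periodSeconds: 5
      livenessProbe:             # failing probes cause a container restart
        httpGet: {path: /healthz, port: 8080}
        periodSeconds: 10
```

The kubelet, not the control plane, executes these probes and restarts containers; the control plane only stores the declared intent.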
Option C (API server) is critical but does not run Pods; it is the control plane’s front door for storing and serving cluster state. Option A (“node server”) is not a Kubernetes component. Option B (etcd static pods) is a misunderstanding: etcd is the datastore for Kubernetes state and may run as static Pods in some installations, but it is not the mechanism that runs user workloads across nodes.
So, Kubernetes runs Pods “across the fleet” because each node has a kubelet that can realize scheduled PodSpecs locally and keep them healthy over time.
=========
Which tools enable Kubernetes HorizontalPodAutoscalers to use custom, application-generated metrics to trigger scaling events?
Prometheus and the prometheus-adapter.
Graylog and graylog-autoscaler metrics.
Graylog and the kubernetes-adapter.
Grafana and Prometheus.
To scale on custom, application-generated metrics, the Horizontal Pod Autoscaler (HPA) needs those metrics exposed through the Kubernetes custom metrics (or external metrics) API. A common and Kubernetes-documented approach is Prometheus + prometheus-adapter, making A correct. Prometheus scrapes application metrics (for example, request rate, queue depth, in-flight requests) from /metrics endpoints. The prometheus-adapter then translates selected Prometheus time series into the Kubernetes Custom Metrics API so the HPA controller can fetch them and make scaling decisions.
Why not the other options: Grafana is a visualization tool; it does not provide the metrics API translation layer required by HPA, so "Grafana and Prometheus" is incomplete. Graylog is primarily a log management system; it's not the standard solution for feeding custom metrics into HPA via the Kubernetes metrics APIs. The "kubernetes-adapter" named in option C is not a standard adapter in the Kubernetes ecosystem for Prometheus-backed custom metrics (the recognized component is prometheus-adapter).
This matters operationally because HPA is not limited to CPU/memory. CPU and memory use resource metrics (often from metrics-server), but modern autoscaling often needs application signals: message queue length, requests per second, latency, or business metrics. With Prometheus and prometheus-adapter, you can define HPA rules such as “scale to maintain queue depth under X” or “scale based on requests per second per pod.” This can produce better scaling behavior than CPU-based scaling alone, especially for I/O-bound services or workloads with uneven CPU profiles.
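A sketch of an HPA targeting a custom per-Pod metric (the metric name http_requests_per_second is hypothetical; it would be whatever the prometheus-adapter is configured to expose through the Custom Metrics API):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Pods                 # per-Pod custom metric, averaged across Pods
      pods:
        metric:
          name: http_requests_per_second
        target:
          type: AverageValue
          averageValue: "100"    # add replicas above ~100 req/s per Pod
```

The HPA controller fetches this metric from the Custom Metrics API (served by prometheus-adapter) rather than from metrics-server.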
So the correct tooling combination in the provided choices is Prometheus and the prometheus-adapter, option A.
=========
How can you extend the Kubernetes API?
Adding a CustomResourceDefinition or implementing an aggregation layer.
Adding a new version of a resource, for instance v4beta3.
With the command kubectl extend api, logged in as an administrator.
Adding the desired API object as a kubelet parameter.
A is correct: Kubernetes' API can be extended by adding CustomResourceDefinitions (CRDs) and/or by implementing the API Aggregation Layer. These are the two canonical extension mechanisms.
CRDs let you define new resource types (new kinds) that the Kubernetes API server stores in etcd and serves like native objects. You typically pair a CRD with a controller/operator that watches those custom objects and reconciles real resources accordingly. This pattern is foundational to the Kubernetes ecosystem (many popular add-ons install CRDs).
The aggregation layer allows you to add entire API services (aggregated API servers) that serve additional endpoints under the Kubernetes API. This is used when you want custom API behavior, custom storage, or specialized semantics beyond what CRDs provide (or when implementing APIs like metrics APIs historically).
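A minimal CRD sketch (the group, kind, and schema fields are hypothetical):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com      # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule: {type: string}
```

Once applied, the API server serves /apis/example.com/v1/namespaces/*/backups like any built-in resource, and a controller can watch those objects.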
Why the other answers are wrong:
B is not how API extension works. You don't "extend the API" by inventing new versions like v4beta3; versions are defined and implemented by API servers/controllers, not by users arbitrarily.
C is fictional; there is no standard kubectl extend api command.
D is also incorrect; kubelet parameters configure node agent behavior, not API server types and discovery.
So, the verified ways to extend Kubernetes' API surface are CRDs and API aggregation, which is option A.
=========
What factors influence the Kubernetes scheduler when it places Pods on nodes?
Pod memory requests, node taints, and Pod affinity.
Pod labels, node labels, and request labels.
Node taints, node level, and Pod priority.
Pod priority, container command, and node labels.
The Kubernetes scheduler chooses a node for a Pod by evaluating scheduling constraints and cluster state. Key inputs include resource requests (CPU/memory), taints/tolerations, and affinity/anti-affinity rules. Option A directly names three real, high-impact scheduling factors (Pod memory requests, node taints, and Pod affinity), so A is correct.
Resource requests are fundamental: the scheduler must ensure the target node has enough allocatable CPU/memory to satisfy the Pod’s requests. Requests (not limits) drive placement decisions. Taints on nodes repel Pods unless the Pod has a matching toleration, which is commonly used to reserve nodes for special workloads (GPU nodes, system nodes, restricted nodes) or to protect nodes under certain conditions. Affinity and anti-affinity allow expressing “place me near” or “place me away” rules—e.g., keep replicas spread across failure domains or co-locate components for latency.
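All three factors can be sketched in a single PodSpec (the labels, taint key, and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache
  labels:
    app: cache
spec:
  containers:
    - name: redis
      image: redis:7
      resources:
        requests:
          cpu: "500m"
          memory: 512Mi          # requests, not limits, drive placement
  tolerations:
    - key: dedicated
      operator: Equal
      value: cache
      effect: NoSchedule         # permits scheduling onto tainted "cache" nodes
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: cache
          topologyKey: kubernetes.io/hostname   # spread replicas across nodes
```

The scheduler filters out nodes lacking 512Mi allocatable memory or bearing untolerated taints, then the anti-affinity rule keeps two cache Pods off the same node.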
Option B includes labels, which do matter, but “request labels” is not a standard scheduler concept; labels influence scheduling mainly through selectors and affinity, not as a direct category called “request labels.” Option C mixes a real concept (taints, priority) with “node level,” which isn’t a standard scheduling factor term. Option D includes “container command,” which does not influence scheduling; the scheduler does not care what command the container runs, only placement constraints and resources.
Under the hood, kube-scheduler uses a two-phase process (filtering then scoring) to select a node, but the inputs it filters/scores include exactly the kinds of constraints in A. Therefore, the verified best answer is A.
=========
The IPv4/IPv6 dual stack in Kubernetes:
Translates an IPv4 request from a Service to an IPv6 Service.
Allows you to access the IPv4 address by using the IPv6 address.
Requires NetworkPolicies to prevent Services from mixing requests.
Allows you to create IPv4 and IPv6 dual stack Services.
The correct answer is D: Kubernetes dual-stack support allows you to create Services (and Pods, depending on configuration) that use both IPv4 and IPv6 addressing. Dual-stack means the cluster is configured to allocate and route traffic for both IP families. For Services, this can mean assigning both an IPv4 ClusterIP and an IPv6 ClusterIP so clients can connect using either family, depending on their network stack and DNS resolution.
Option A is incorrect because dual-stack is not about protocol translation (that would be NAT64/other gateway mechanisms, not the core Kubernetes dual-stack feature). Option B is also a form of translation/aliasing that isn’t what Kubernetes dual-stack implies; having both addresses available is different from “access IPv4 via IPv6.” Option C is incorrect: dual-stack does not inherently require NetworkPolicies to “prevent mixing requests.” NetworkPolicies are about traffic control, not IP family separation.
In Kubernetes, dual-stack requires support across components: the network plugin (CNI) must support IPv4/IPv6, the cluster must be configured with both Pod CIDRs and Service CIDRs, and DNS should return appropriate A and AAAA records for Service names. Once configured, you can specify preferences such as ipFamilyPolicy (e.g., PreferDualStack) and ipFamilies (IPv4, IPv6 order) for Services to influence allocation behavior.
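A dual-stack Service sketch using those fields (this assumes the cluster itself is configured with dual-stack Pod and Service CIDRs; names and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ipFamilyPolicy: PreferDualStack   # allocate both families when available
  ipFamilies:
    - IPv4
    - IPv6                          # order determines the primary family
  ports:
    - port: 80
      targetPort: 8080
```

With PreferDualStack, the Service still works on an IPv4-only cluster; on a dual-stack cluster it receives one ClusterIP per family.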
Operationally, dual-stack is useful for environments transitioning to IPv6, supporting IPv6-only clients, or running in mixed networks. But it adds complexity: address planning, firewalling, and troubleshooting need to consider two IP families. Still, the definition in the question is straightforward: Kubernetes dual-stack enables dual-stack Services, which is option D.
=========
What is the core metric type in Prometheus used to represent a single numerical value that can go up and down?
Summary
Counter
Histogram
Gauge
In Prometheus, a Gauge represents a single numerical value that can increase and decrease over time, which makes D the correct answer. Gauges are used for values like current memory usage, number of in-flight requests, queue depth, temperature, or CPU usage: anything that can move up and down.
This contrasts with a Counter, which is strictly monotonically increasing (it only goes up, except for resets when a process restarts). Counters are ideal for cumulative totals like total HTTP requests served, total errors, or bytes transmitted. Histograms and Summaries are used to capture distributions (often latency distributions), providing bucketed counts (histogram) or quantile approximations (summary), and are not the "single value that goes up and down" primitive the question asks for.
In Kubernetes observability, metrics are a primary signal for understanding system health and performance. Prometheus is widely used to scrape metrics from Kubernetes components (kubelet, API server, controller-manager), cluster add-ons, and applications. Gauges are common for resource utilization metrics and for instantaneous states, such as container_memory_working_set_bytes or go_goroutines.
When you build alerting and dashboards, selecting the right metric type matters. For example, if you want to alert on the current memory usage, a gauge is appropriate. If you want to compute request rates, you typically use counters with Prometheus functions like rate() to derive per-second rates. Histograms and summaries are used when you need latency percentiles or distribution analysis.
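Both patterns can be sketched as Prometheus rules (the recording-rule name and the alert threshold are illustrative; http_requests_total and container_memory_working_set_bytes are standard counter and gauge metrics respectively):

```yaml
groups:
  - name: examples
    rules:
      # Counter: derive a per-second rate for dashboards and scaling signals.
      - record: job:http_requests:rate5m
        expr: rate(http_requests_total[5m])
      # Gauge: alert directly on the current value.
      - alert: HighMemoryWorkingSet
        expr: container_memory_working_set_bytes > 2e9
        for: 10m
```

Note the asymmetry: the counter is only useful after rate(), while the gauge is meaningful as-is.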
So, for "a single numerical value that can go up and down," the correct Prometheus metric type is Gauge (D).
=========
What is a Kubernetes Service Endpoint?
It is the API endpoint of our Kubernetes cluster.
It is a name of special Pod in kube-system namespace.
It is an IP address that we can access from the Internet.
It is an object that gets IP addresses of individual Pods assigned to it.
A Kubernetes Service routes traffic to a dynamic set of backends (usually Pods). The set of backend IPs and ports is represented by endpoint-tracking resources. Historically this was the Endpoints object; today Kubernetes commonly uses EndpointSlice for scalability, but the concept remains the same: endpoints represent the concrete network destinations behind a Service. That's why D is correct: a Service endpoint is an object that contains the IP addresses (and ports) of the individual Pods (or other backends) associated with that Service.
When a Service has a selector, Kubernetes automatically maintains endpoints by watching which Pods match the selector and are Ready, then publishing those Pod IPs into Endpoints/EndpointSlices. Consumers don’t usually use endpoints directly; instead they call the Service DNS name, and kube-proxy (or an alternate dataplane) forwards traffic to one of the endpoints. Still, endpoints are critical because they are what make Service routing accurate and up to date during scaling events, rolling updates, and failures.
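For illustration, an EndpointSlice published for a Service named web might look roughly like this (the addresses and names are hypothetical; real slices are generated and managed by Kubernetes, not written by hand):

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: web-abc12
  labels:
    kubernetes.io/service-name: web   # ties this slice to the Service
addressType: IPv4
ports:
  - name: http
    port: 8080
    protocol: TCP
endpoints:
  - addresses: ["10.244.1.17"]        # a ready Pod's IP behind the Service
    conditions:
      ready: true
```

When a Pod fails its readiness probe, its entry's ready condition flips and the dataplane stops routing to it, without the Service's ClusterIP ever changing.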
Option A confuses this with the Kubernetes API server endpoint (the cluster API URL). Option B is incorrect; there’s no special “Service Endpoint Pod.” Option C describes an external/public IP concept, which may exist for LoadBalancer Services, but “Service endpoint” in Kubernetes vocabulary is about the backend destinations, not the public entrypoint.
Operationally, endpoints are useful for debugging: if a Service isn’t routing traffic, checking Endpoints/EndpointSlices shows whether the Service actually has backends and whether readiness is excluding Pods. This ties directly into Kubernetes service discovery and load balancing: the Service is the stable front door; endpoints are the actual backends.
=========
Which of the following scenarios would benefit the most from a service mesh architecture?
A few applications with hundreds of Pod replicas running in multiple clusters, each one providing multiple services.
Thousands of distributed applications running in a single cluster, each one providing multiple services.
Tens of distributed applications running in multiple clusters, each one providing multiple services.
Thousands of distributed applications running in multiple clusters, each one providing multiple services.
A service mesh is most valuable when service-to-service communication becomes complex at large scale: many services, many teams, and often multiple clusters. That's why D is the best fit: thousands of distributed applications across multiple clusters. In that scenario, the operational burden of securing, observing, and controlling east-west traffic grows dramatically. A service mesh (e.g., Istio, Linkerd) addresses this by introducing a dedicated networking layer (usually sidecar proxies such as Envoy) that standardizes capabilities across services without requiring each application to implement them consistently.
The common "mesh" value-adds are: mTLS for service identity and encryption, fine-grained traffic policy (retries, timeouts, circuit breaking), traffic shifting (canary, mirroring), and consistent telemetry (metrics, traces, access logs). Those features become increasingly beneficial as the number of services and cross-service calls rises, and as you add multi-cluster routing, failover, and policy management across environments. With thousands of applications, inconsistent libraries and configurations become a reliability and security risk; the mesh centralizes and standardizes these behaviors.
In smaller environments (A or C), you can often meet requirements with simpler approaches: Kubernetes Services, Ingress/Gateway, basic mTLS at the edge, and application-level libraries. A single large cluster (B) can still benefit from a mesh, but adding multiple clusters increases complexity: traffic management across clusters, identity trust domains, global observability correlation, and consistent policy enforcement. That's where mesh architectures typically justify their additional overhead (extra proxies, control plane components, operational complexity).
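As one example of traffic shifting, an Istio VirtualService can split traffic by weight (this sketch assumes a matching DestinationRule defining the v1/v2 subsets; service and subset names are hypothetical):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: checkout
spec:
  hosts:
    - checkout
  http:
    - route:
        - destination:
            host: checkout
            subset: v1
          weight: 90
        - destination:
            host: checkout
            subset: v2
          weight: 10      # canary: 10% of traffic to the new version
```

The application code is unchanged; the sidecar proxies enforce the split, which is exactly the "standardized capabilities without per-app implementation" benefit.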
So, the “most benefit” scenario is the largest, most distributed footprint—D.
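As one concrete illustration of how a mesh standardizes behavior without touching application code: meshes such as Istio typically inject the sidecar proxy automatically once a namespace opts in. A minimal sketch, assuming Istio's namespace label (the namespace name is illustrative):

```yaml
# Illustrative sketch: opting a namespace into Istio's automatic
# sidecar injection. Pods created in this namespace afterwards get
# an Envoy proxy container added alongside the app container.
apiVersion: v1
kind: Namespace
metadata:
  name: payments            # illustrative namespace name
  labels:
    istio-injection: enabled   # Istio-specific label; other meshes differ
```

The point is that mTLS, retries, and telemetry then apply uniformly to every workload in the namespace, which is exactly the consistency that becomes hard to achieve with per-application libraries at large scale.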
=========
Which cloud native tool keeps Kubernetes clusters in sync with sources of configuration (like Git repositories), and automates updates to configuration when there is new code to deploy?
Flux and ArgoCD
GitOps Toolkit
Linkerd and Istio
Helm and Kustomize
Tools that continuously reconcile cluster state to match a Git repository’s desired configuration are GitOps controllers, and the best match here is Flux and ArgoCD, so A is correct. GitOps is the practice where Git is the source of truth for declarative system configuration. A GitOps tool continuously compares the desired state (manifests/Helm/Kustomize outputs stored in Git) with the actual state in the cluster and then applies changes to eliminate drift.
Flux and Argo CD both implement this reconciliation loop. They watch Git repositories, detect updates (new commits/tags), and apply the updated Kubernetes resources. They also surface drift and sync status, enabling auditable, repeatable deployments and easy rollbacks (revert Git). This model improves delivery velocity and security because changes flow through code review, and cluster changes can be restricted to the GitOps controller identity rather than ad-hoc human kubectl access.
Option B (“GitOps Toolkit”) is related—Flux v2 is itself built from the GitOps Toolkit—but the question asks for a “tool” that keeps clusters in sync; the recognized tools are Flux and Argo CD in this list. Option C lists service meshes (traffic/security/telemetry), not deployment synchronization tools. Option D lists packaging/templating tools; Helm and Kustomize help build manifests, but they do not, by themselves, continuously reconcile cluster state to a Git source.
In Kubernetes application delivery, GitOps tools become the deployment engine: CI builds artifacts, updates references in Git (image tags/digests), and the GitOps controller deploys those changes. This separation strengthens traceability and reduces configuration drift. Therefore, A is the verified correct answer.
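A hedged sketch of what this reconciliation loop looks like in Flux v2 (the resource kinds are Flux's; the repository URL, path, and intervals are illustrative):

```yaml
# Illustrative Flux v2 configuration: a GitRepository source plus a
# Kustomization that keeps the cluster in sync with the repository.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-repo
  namespace: flux-system
spec:
  interval: 1m                              # poll the repo every minute
  url: https://example.com/org/app-config   # illustrative URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 10m              # re-reconcile periodically even without new commits
  sourceRef:
    kind: GitRepository
    name: app-repo
  path: ./deploy             # illustrative path within the repo
  prune: true                # delete cluster resources removed from Git
```

Argo CD expresses the same idea with an `Application` resource; in both cases the controller, not a human with kubectl, is what changes the cluster.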
=========
What is the minimum number of etcd members that are required for a highly available Kubernetes cluster?
Two etcd members.
Five etcd members.
Six etcd members.
Three etcd members.
D (three etcd members) is correct. etcd is a distributed key-value store that uses the Raft consensus algorithm. High availability in consensus systems depends on maintaining a quorum (majority) of members to continue serving writes reliably. With 3 members, the cluster can tolerate 1 failure and still have 2/3 available—enough for quorum.
Two members is a common trap: with 2, a single failure leaves 1/2, which is not a majority, so the cluster cannot safely make progress. That means 2-member etcd is not HA; it is fragile and can be taken down by one node loss, network partition, or maintenance event. Five members can tolerate 2 failures and is a valid HA configuration, but it is not the minimum. Six is even-sized and generally discouraged for consensus because it doesn’t improve failure tolerance compared to five (quorum still requires 4), while increasing coordination overhead.
In Kubernetes, etcd reliability directly affects the API server and the entire control plane because etcd stores cluster state: object specs, status, controller state, and more. If etcd loses quorum, the API server will be unable to persist or reliably read/write state, leading to cluster management outages. That’s why the minimum HA baseline is three etcd members, often across distinct failure domains (nodes/AZs), with strong disk performance and consistent low-latency networking.
So, the smallest etcd topology that provides true fault tolerance is 3 members, which corresponds to option D.
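The quorum arithmetic behind these claims is easy to verify directly:

```latex
\mathrm{quorum}(n) = \left\lfloor \tfrac{n}{2} \right\rfloor + 1,
\qquad
\mathrm{tolerated\ failures}(n) = n - \mathrm{quorum}(n)
```

Plugging in: n = 2 gives quorum 2 (tolerates 0 failures), n = 3 gives quorum 2 (tolerates 1), n = 5 gives quorum 3 (tolerates 2), and n = 6 gives quorum 4 (still only 2 tolerated failures), which is why even member counts add overhead without improving fault tolerance.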
=========
The Kubernetes project work is carried out primarily by SIGs. What does SIG stand for?
Special Interest Group
Software Installation Guide
Support and Information Group
Strategy Implementation Group
In Kubernetes governance and project structure, SIG stands for Special Interest Group, so A is correct. Kubernetes is a large open source project under the Cloud Native Computing Foundation (CNCF), and its work is organized into groups that focus on specific domains—such as networking, storage, node, scheduling, security, docs, testing, and many more. SIGs provide a scalable way to coordinate contributors, prioritize work, review design proposals (KEPs), triage issues, and manage releases in their area.
Each SIG typically has regular meetings, mailing lists, chat channels, and maintainers who guide the direction of that part of the project. For example, SIG Network focuses on Kubernetes networking architecture and components, SIG Storage on storage APIs and CSI integration, and SIG Scheduling on scheduler behavior and extensibility. This structure helps Kubernetes evolve while maintaining quality, review rigor, and community-driven decision making.
The other options are not part of Kubernetes project terminology. “Software Installation Guide” and the others might sound plausible, but they are not how Kubernetes defines SIGs.
Understanding SIGs matters operationally because many Kubernetes features and design changes originate from SIGs. When you read Kubernetes enhancement proposals, release notes, or documentation, you’ll often see SIG ownership and references. In short, SIGs are the primary organizational units for Kubernetes engineering and stewardship, and SIG = Special Interest Group.
=========
What sentence is true about CronJobs in Kubernetes?
A CronJob creates one or multiple Jobs on a repeating schedule.
A CronJob creates one container on a repeating schedule.
CronJobs are useful on Linux but are obsolete in Kubernetes.
The CronJob schedule format is different in Kubernetes and Linux.
The true statement is A: a Kubernetes CronJob creates Jobs on a repeating schedule. CronJob is a controller designed for time-based execution. You define a schedule using standard cron syntax (minute, hour, day-of-month, month, day-of-week), and when the schedule triggers, the CronJob controller creates a Job object. Then the Job controller creates one or more Pods to run the task to completion.
Option B is incorrect because CronJobs do not “create one container”; they create Jobs, and Jobs create Pods (which may contain one or multiple containers). Option C is wrong because CronJobs are a core Kubernetes workload primitive for recurring tasks and remain widely used for periodic work like backups, batch processing, and cleanup. Option D is wrong because Kubernetes CronJobs intentionally use cron-like scheduling expressions; the format aligns with the cron concept (with Kubernetes-specific controller behavior around missed runs, concurrency, and history).
CronJobs also provide operational controls you don’t get from plain Linux cron on a node:
concurrencyPolicy (Allow/Forbid/Replace) to manage overlapping runs
startingDeadlineSeconds to control how missed schedules are handled
history limits for successful/failed Jobs to avoid clutter
integration with Kubernetes RBAC, Secrets, ConfigMaps, and volumes for consistent runtime configuration
consistent execution environment via container images, not ad-hoc node scripts
Because the CronJob creates Jobs as first-class API objects, you get observability (events/status), predictable retries, and lifecycle management. That’s why the accurate statement is A.
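The operational controls listed above map directly onto CronJob spec fields; a minimal sketch (the name, image, and schedule are illustrative):

```yaml
# Illustrative CronJob: runs a backup task every day at 02:00.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup          # illustrative name
spec:
  schedule: "0 2 * * *"         # standard cron syntax: minute hour dom month dow
  concurrencyPolicy: Forbid     # skip a run if the previous Job is still active
  startingDeadlineSeconds: 300  # give up on a run missed by more than 5 minutes
  successfulJobsHistoryLimit: 3 # keep only the last 3 successful Jobs
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: example.com/backup:1.0          # illustrative image
              args: ["--target", "s3://backups/db"]  # illustrative argument
```

When the schedule fires, the controller creates a Job from jobTemplate, and that Job creates the Pod—matching the A-option chain described above.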
=========
Which of the following is a feature Kubernetes provides by default as a container orchestration tool?
A portable operating system.
File system redundancy.
A container image registry.
Automated rollouts and rollbacks.
Kubernetes provides automated rollouts and rollbacks for workloads by default (via controllers like Deployments), so D is correct. In Kubernetes, application delivery is controller-driven: you declare the desired state (new image, new config), and controllers reconcile the cluster toward that state. Deployments implement rolling updates, gradually replacing old Pods with new ones while respecting availability constraints. Kubernetes tracks rollout history and supports rollback to previous ReplicaSets when an update fails or is deemed unhealthy.
This is a core orchestration capability: it reduces manual intervention and makes change safer. Rollouts use readiness checks and update strategies to avoid taking the service down, and kubectl rollout status/history/undo supports day-to-day release operations.
The other options are not “default Kubernetes orchestration features”:
Kubernetes is not a portable operating system (A). It’s a platform for orchestrating containers on top of an OS.
Kubernetes does not provide filesystem redundancy by itself (B). Storage redundancy is handled by underlying storage systems and CSI drivers (e.g., replicated block storage, distributed filesystems).
Kubernetes does not include a built-in container image registry (C). You use external registries (Docker Hub, ECR, GCR, Harbor, etc.). Kubernetes pulls images but does not host them as a core feature.
So the correct “provided by default” orchestration feature in this list is the ability to safely manage application updates via automated rollouts and rollbacks.
=========
What is the default deployment strategy in Kubernetes?
Rolling update
Blue/Green deployment
Canary deployment
Recreate deployment
For Kubernetes Deployments, the default update strategy is RollingUpdate, which corresponds to “Rolling update” in option A. Rolling updates replace old Pods with new Pods gradually, aiming to maintain availability during the rollout. Kubernetes does this by creating a new ReplicaSet for the updated Pod template and then scaling the new ReplicaSet up while scaling the old one down.
The pace and safety of a rolling update are controlled by parameters like maxUnavailable and maxSurge. maxUnavailable limits how many replicas can be unavailable during the update, protecting availability. maxSurge controls how many extra replicas can be created temporarily above the desired count, helping speed up rollouts while maintaining capacity. If readiness probes fail, Kubernetes will pause progression because new Pods aren’t becoming Ready, helping prevent a bad version from fully replacing a good one.
Options B (Blue/Green) and C (Canary) are popular progressive delivery patterns, but they are not the default built-in Deployment strategy. They are typically implemented using additional tooling (service mesh routing, traffic splitting controllers, or specialized rollout controllers) or by operating multiple Deployments/Services. Option D (Recreate) is a valid strategy but not the default; it terminates all old Pods before creating new ones, causing downtime unless you have external buffering or multi-tier redundancy.
From an application delivery perspective, RollingUpdate aligns with Kubernetes’ declarative model: you update the desired Pod template and let the controller converge safely. kubectl rollout status is commonly used to monitor progress. Rollbacks are also supported because the Deployment tracks history. Therefore, the verified correct answer is A: Rolling update.
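The pacing parameters described above live under the Deployment's strategy field; a minimal sketch (the name, image, and counts are illustrative):

```yaml
# Illustrative Deployment using the default RollingUpdate strategy
# with explicit pacing parameters.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate        # the default; shown explicitly for clarity
    rollingUpdate:
      maxUnavailable: 1        # at most one replica down during the rollout
      maxSurge: 1              # at most one extra replica above desired count
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:2.0   # illustrative image
          readinessProbe:              # gates rollout progression
            httpGet:
              path: /healthz
              port: 8080
```

Changing the image in spec.template triggers the rolling update; if new Pods never become Ready, the rollout stalls rather than replacing the working version.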
=========
What is the main purpose of the Ingress in Kubernetes?
Access HTTP and HTTPS services running in the cluster based on their IP address.
Access services different from HTTP or HTTPS running in the cluster based on their IP address.
Access services different from HTTP or HTTPS running in the cluster based on their path.
Access HTTP and HTTPS services running in the cluster based on their path.
D is correct. Ingress is a Kubernetes API object that defines rules for external access to HTTP/HTTPS services in a cluster. The defining capability is Layer 7 routing—commonly host-based and path-based routing—so you can route requests like example.com/app1 to one Service and example.com/app2 to another. While the question mentions “based on their path,” that’s a classic and correct Ingress use case (and host routing is also common).
Ingress itself is only the specification of routing rules. An Ingress controller (e.g., NGINX Ingress Controller, HAProxy, Traefik, cloud-provider controllers) is what actually implements those rules by configuring a reverse proxy/load balancer. Ingress typically terminates TLS (HTTPS) and forwards traffic to internal Services, giving a more expressive alternative to exposing every service via NodePort/LoadBalancer.
Why the other options are wrong:
A suggests routing by IP address; Ingress is fundamentally about HTTP(S) routing rules (host/path), not direct Service IP access.
B and C describe non-HTTP protocols; Ingress is specifically for HTTP/HTTPS. For TCP/UDP or other protocols, you generally use Services of type LoadBalancer/NodePort, Gateway API implementations, or controller-specific TCP/UDP configuration.
Ingress is a foundational building block for cloud-native application delivery because it centralizes edge routing, enables TLS management, and supports gradual adoption patterns (multiple services under one domain). Therefore, the main purpose described here matches D.
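The example.com/app1 and /app2 routing described above looks like this as a manifest (the host, paths, and Service names are illustrative):

```yaml
# Illustrative Ingress: routes example.com/app1 and /app2 to
# different backend Services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-routes
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-svc     # illustrative Service name
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-svc     # illustrative Service name
                port:
                  number: 80
```

Remember that this object does nothing on its own: an Ingress controller must be running in the cluster to turn these rules into actual proxy configuration.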
=========
What helps an organization to deliver software more securely at a higher velocity?
Kubernetes
apt-get
Docker Images
CI/CD Pipeline
A CI/CD pipeline is a core practice/tooling approach that enables organizations to deliver software faster and more securely, so D is correct. CI (Continuous Integration) automates building and testing code changes frequently, reducing integration risk and catching defects early. CD (Continuous Delivery/Deployment) automates releasing validated builds into environments using consistent, repeatable steps—reducing manual errors and enabling rapid iteration.
Security improves because automation enables standardized checks on every change: static analysis, dependency scanning, container image scanning, policy validation, and signing/verification steps can be integrated into the pipeline. Instead of relying on ad-hoc human processes, security controls become repeatable gates. In Kubernetes environments, pipelines commonly build container images, run tests, publish artifacts to registries, and then deploy via manifests, Helm, or GitOps controllers—keeping deployments consistent and auditable.
Option A (Kubernetes) is a platform that helps run and manage workloads, but by itself it doesn’t guarantee secure high-velocity delivery. It provides primitives (rollouts, declarative config, RBAC), yet the delivery workflow still needs automation. Option B (apt-get) is a package manager for Debian-based systems and is not a delivery pipeline. Option C (Docker Images) are artifacts; they improve portability and repeatability, but they don’t provide the end-to-end automation of building, testing, promoting, and deploying across environments.
In cloud-native application delivery, the pipeline is the “engine” that turns code changes into safe production releases. Combined with Kubernetes’ declarative deployment model (Deployments, rolling updates, health probes), a CI/CD pipeline supports frequent releases with controlled rollouts, fast rollback, and strong auditability. That is exactly what the question is targeting. Therefore, the verified answer is D.
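One hedged sketch of such a pipeline, written in GitHub Actions syntax purely as an example (the registry, scanner, and helper script are illustrative choices, not prescribed by Kubernetes):

```yaml
# Illustrative CI/CD pipeline: test, build, scan, then hand off to GitOps.
name: delivery
on:
  push:
    branches: [main]
jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests                  # CI: catch defects early
        run: make test                        # illustrative build target
      - name: Build and push image
        run: |
          docker build -t registry.example.com/app:${GITHUB_SHA} .
          docker push registry.example.com/app:${GITHUB_SHA}
      - name: Scan image for known CVEs       # security gate on every change
        run: trivy image registry.example.com/app:${GITHUB_SHA}   # illustrative scanner
      - name: Update image tag in Git         # GitOps handoff: the controller deploys
        run: ./scripts/bump-image-tag.sh ${GITHUB_SHA}            # hypothetical script
```

The last step ties back to the GitOps model discussed earlier: the pipeline changes Git, and a controller such as Flux or Argo CD applies the change to the cluster.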
=========
What are the characteristics for building every cloud-native application?
Resiliency, Operability, Observability, Availability
Resiliency, Containerd, Observability, Agility
Kubernetes, Operability, Observability, Availability
Resiliency, Agility, Operability, Observability
Cloud-native applications are typically designed to thrive in dynamic, distributed environments where infrastructure is elastic and failures are expected. The best set of characteristics listed is Resiliency, Agility, Operability, Observability, making D correct.
Resiliency means the application and its supporting platform can tolerate failures and continue providing service. In Kubernetes terms, resiliency is supported through self-healing controllers, replica management, health probes, and safe rollout mechanisms, but the application must also be designed to handle transient failures, retries, and graceful degradation.
Agility reflects the ability to deliver changes quickly and safely. Cloud-native systems emphasize automation, CI/CD, declarative configuration, and small, frequent releases—often enabled by Kubernetes primitives like Deployments and rollout strategies. Agility is about reducing the friction to ship improvements while maintaining reliability.
Operability is how manageable the system is in production: clear configuration, predictable deployments, safe scaling, and automation-friendly operations. Kubernetes encourages operability through consistent APIs, controllers, and standardized patterns for configuration and lifecycle.
Observability means you can understand what’s happening inside the system using telemetry—metrics, logs, and traces—so you can troubleshoot issues, measure SLOs, and improve performance. Kubernetes provides many integration points for observability, but cloud-native apps must also emit meaningful signals.
Options B and C include items that are not “characteristics” (containerd is a runtime; Kubernetes is a platform). Option A includes “availability,” which is important, but the canonical cloud-native framing in this question emphasizes the four qualities in D as the foundational build characteristics.
=========
How to load and generate data required before the Pod startup?
Use an init container with shared file storage.
Use a PVC volume.
Use a sidecar container with shared volume.
Use another Pod with a PVC.
The Kubernetes-native mechanism to run setup steps before the main application containers start is an init container, so A is correct. Init containers run sequentially and must complete successfully before the regular containers in the Pod are started. This makes them ideal for preparing configuration, downloading artifacts, performing migrations, generating files, or waiting for dependencies.
The question specifically asks how to “load and generate data required before Pod startup.” The most common pattern is: an init container writes files into a shared volume (like an emptyDir volume) mounted by both the init container and the app container. When the init container finishes, the app container starts and reads the generated files. This is deterministic and aligns with Kubernetes Pod lifecycle semantics.
A sidecar container (option C) runs concurrently with the main container, so it is not guaranteed to complete work before startup. Sidecars are great for ongoing concerns (log shipping, proxies, config reloaders), but they are not the primary “before startup” mechanism. A PVC volume (option B) is just storage; it doesn’t itself perform generation or ensure ordering. “Another Pod with a PVC” (option D) introduces coordination complexity and still does not guarantee the data is prepared before this Pod starts unless you build additional synchronization.
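The shared-volume pattern looks like this as a manifest (the image names and file paths are illustrative):

```yaml
# Illustrative Pod: an init container generates a file into an
# emptyDir volume, and the app container reads it at startup.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
    - name: generate-config
      image: busybox:1.36
      command: ["sh", "-c", "echo generated=true > /work/app.conf"]
      volumeMounts:
        - name: workdir
          mountPath: /work
  containers:
    - name: app
      image: example.com/app:1.0      # illustrative image
      volumeMounts:
        - name: workdir
          mountPath: /etc/app         # app reads /etc/app/app.conf
  volumes:
    - name: workdir
      emptyDir: {}                    # shared scratch space for this Pod
```

Because generate-config must exit successfully before the app container starts, the file is guaranteed to exist when the application boots.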
Init containers are explicitly designed for this kind of pre-flight work, and Kubernetes guarantees ordering: all init containers complete in order, then the app containers begin. That guarantee is why A is the best and verified answer.
=========
Which of the following sentences is true about namespaces in Kubernetes?
You can create a namespace within another namespace in Kubernetes.
You can create two resources of the same kind and name in a namespace.
The default namespace exists when a new cluster is created.
All the objects in the cluster are namespaced by default.
The true statement is C: the default namespace exists when a new cluster is created. Namespaces are a Kubernetes mechanism for partitioning cluster resources into logical groups. When you set up a cluster, Kubernetes creates some initial namespaces (including default, and commonly kube-system, kube-public, and kube-node-lease). The default namespace is where resources go if you don’t specify a namespace explicitly.
Option A is false because namespaces are not hierarchical; Kubernetes does not support “namespaces inside namespaces.” Option B is false because within a given namespace, resource names must be unique per resource kind. You can’t have two Deployments with the same name in the same namespace. You can have a Deployment named web in one namespace and another Deployment named web in a different namespace—namespaces provide that scope boundary. Option D is false because not all objects are namespaced. Many resources are cluster-scoped (for example, Nodes, PersistentVolumes, ClusterRoles, ClusterRoleBindings, and StorageClasses). Namespaces apply only to namespaced resources.
Operationally, namespaces support multi-tenancy and environment separation (dev/test/prod), RBAC scoping, resource quotas, and policy boundaries. For example, you can grant a team access only to their namespace and enforce quotas that prevent them from consuming excessive CPU/memory. Namespaces also make organization and cleanup easier: deleting a namespace removes most namespaced resources inside it (subject to finalizers).
So, the verified correct statement is C: the default namespace exists upon cluster creation.
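As one illustration of the quota boundary mentioned above, a ResourceQuota caps what a team can consume inside its namespace (the namespace name and limits are illustrative):

```yaml
# Illustrative ResourceQuota: caps total requested/limited CPU and
# memory, and the Pod count, for everything in the team-a namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a       # illustrative namespace
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"
```

Combined with RBAC scoped to the namespace, this is how namespaces enforce multi-tenancy boundaries in practice.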
=========
What default level of protection is applied to the data in Secrets in the Kubernetes API?
The values use AES symmetric encryption
The values are stored in plain text
The values are encoded with SHA256 hashes
The values are base64 encoded
Kubernetes Secrets are designed to store sensitive data such as tokens, passwords, or certificates and make them available to Pods in controlled ways (as environment variables or mounted files). However, the default protection applied to Secret values in the Kubernetes API is base64 encoding, not encryption. That is why D is correct. Base64 is an encoding scheme that converts binary data into ASCII text; it is reversible and does not provide confidentiality.
By default, Secret objects are stored in the cluster’s backing datastore (commonly etcd) as base64-encoded strings inside the Secret manifest. Unless the cluster is configured for encryption at rest, those values are effectively stored unencrypted in etcd and may be visible to anyone who can read etcd directly or who has API permissions to read Secrets. This distinction is critical for security: base64 can prevent accidental issues with special characters in YAML/JSON, but it does not protect against attackers.
Option A is only correct if encryption at rest is explicitly configured on the API server using an EncryptionConfiguration (for example, AES-CBC or AES-GCM providers). Many managed Kubernetes offerings enable encryption at rest for etcd as an option or by default, but that is a deployment choice, not the universal Kubernetes default. Option C is incorrect because hashing is used for verification, not for secret retrieval; you typically need to recover the original value, so hashing isn’t suitable for Secrets. Option B (“plain text”) is misleading: the stored representation is base64-encoded, but because base64 is reversible, the security outcome is close to plain text unless encryption at rest and strict RBAC are in place.
The correct operational stance is: treat Kubernetes Secrets as sensitive; lock down access with RBAC, enable encryption at rest, avoid broad Secret read permissions, and consider external secret managers when appropriate. But strictly for the question’s wording—default level of protection—base64 encoding is the right answer.
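To make the encoding point concrete, this is what a Secret looks like as stored in the API (the name is illustrative; the values are common example strings, base64-encoded):

```yaml
# Illustrative Secret: the data values are only base64-encoded.
# Anyone who can read this object can trivially decode them, e.g.
#   echo 'cGFzc3dvcmQ=' | base64 -d   prints: password
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials      # illustrative name
type: Opaque
data:
  username: YWRtaW4=        # base64 of "admin"
  password: cGFzc3dvcmQ=    # base64 of "password"
```

The one-line decode in the comment is the whole point: base64 is a transport encoding, not a protection mechanism.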
=========
What is a cloud native application?
It is a monolithic application that has been containerized and is running now on the cloud.
It is an application designed to be scalable and take advantage of services running on the cloud.
It is an application designed to run all its functions in separate containers.
It is any application that runs in a cloud provider and uses its services.
B is correct. A cloud native application is designed to be scalable, resilient, and adaptable, and to leverage cloud/platform capabilities rather than merely being “hosted” on a cloud VM. Cloud-native design emphasizes principles like elasticity (scale up/down), automation, fault tolerance, and rapid, reliable delivery. While containers and Kubernetes are common enablers, the key is the architectural intent: build applications that embrace distributed systems patterns and cloud-managed primitives.
Option A is not enough. Simply containerizing a monolith and running it in the cloud does not automatically make it cloud native; that may be “lift-and-shift” packaging. The application might still be tightly coupled, hard to scale, and operationally fragile. Option C is too narrow and prescriptive; cloud native does not require “all functions in separate containers” (microservices are common but not mandatory). Many cloud-native apps use a mix of services, and even monoliths can be made more cloud native by adopting statelessness, externalized state, and automated delivery. Option D is too broad; “any app running in a cloud provider” includes legacy apps that don’t benefit from elasticity or cloud-native operational models.
Cloud-native applications typically align with patterns: stateless service tiers, declarative configuration, health endpoints, horizontal scaling, graceful shutdown, and reliance on managed backing services (databases, queues, identity, observability). They are built to run reliably in dynamic environments where instances are replaced routinely—an assumption that matches Kubernetes’ reconciliation and self-healing model.
So, the best verified definition among these options isB.
=========
Which of the following are tasks performed by a container orchestration tool?
Schedule, scale, and manage the health of containers.
Create images, scale, and manage the health of containers.
Debug applications, and manage the health of containers.
Store images, scale, and manage the health of containers.
A container orchestration tool (like Kubernetes) is responsible for scheduling, scaling, and health management of workloads, making A correct. Orchestration sits above individual containers and focuses on running applications reliably across a fleet of machines. Scheduling means deciding which node should run a workload based on resource requests, constraints, affinities, taints/tolerations, and current cluster state. Scaling means changing the number of running instances (replicas) to meet demand (manually or automatically through autoscalers). Health management includes monitoring whether containers and Pods are alive and ready, replacing failed instances, and maintaining the declared desired state.
Options B and D include “create images” and “store images,” which are not orchestration responsibilities. Image creation is a CI/build responsibility (Docker/BuildKit/build systems), and image storage is a container registry responsibility (Harbor, ECR, GCR, Docker Hub, etc.). Kubernetes consumes images from registries but does not build or store them. Option C includes “debug applications,” which is not a core orchestration function. While Kubernetes provides tools that help debugging (logs, exec, events), debugging is a human/operator activity rather than the orchestrator’s fundamental responsibility.
In Kubernetes specifically, these orchestration tasks are implemented through controllers and control loops: Deployments/ReplicaSets manage replica counts and rollouts, kube-scheduler assigns Pods to nodes, kubelet ensures containers run, and probes plus controller logic replace unhealthy replicas. This is exactly what makes Kubernetes valuable at scale: instead of manually starting/stopping containers on individual hosts, you declare your intent and let the orchestration system continually reconcile reality to match. That combination—placement + elasticity + self-healing—is the core of container orchestration, matching option A precisely.
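The scaling responsibility mentioned above can be automated declaratively; a minimal sketch of a HorizontalPodAutoscaler (the target Deployment name and thresholds are illustrative):

```yaml
# Illustrative HorizontalPodAutoscaler: scales a Deployment between
# 2 and 10 replicas based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api              # illustrative target Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```

This is orchestration in miniature: you declare the elasticity policy, and the control loop adjusts replica counts while the scheduler places the resulting Pods.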
=========
Which statement about the Kubernetes network model is correct?
Pods can only communicate with Pods exposed via a Service.
Pods can communicate with all Pods without NAT.
The Pod IP is only visible inside a Pod.
The Service IP is used for the communication between Services.
Kubernetes’ networking model assumes that every Pod has its own IP address and that Pods can communicate with other Pods across nodes without requiring network address translation (NAT). That makes B correct. This is one of Kubernetes’ core design assumptions and is typically implemented via CNI plugins that provide flat, routable Pod networking (or equivalent behavior using encapsulation/routing).
This model matters because scheduling is dynamic. The scheduler can place Pods anywhere in the cluster, and applications should not need to know whether a peer is on the same node or a different node. With the Kubernetes network model, Pod-to-Pod communication works uniformly: a Pod can reach any other Pod IP directly, and nodes can reach Pods as well. Services and DNS add stable naming and load balancing, but direct Pod connectivity is part of the baseline model.
Option A is incorrect because Pods can communicate directly using Pod IPs even without Services (subject to NetworkPolicies and routing). Services are abstractions for stable access and load balancing; they are not the only way Pods can communicate. Option C is incorrect because Pod IPs are not limited to visibility “inside a Pod”; they are routable within the cluster network. Option D is misleading: Services are often used by Pods (clients) to reach a set of Pods (backends). “Service IP used for communication between Services” is not the fundamental model; Services are virtual IPs for reaching workloads, and “Service-to-Service communication” usually means one workload calling another via the target Service name.
A useful way to remember the official model: (1) all Pods can communicate with all other Pods (no NAT), (2) all nodes can communicate with all Pods (no NAT), (3) Pod IPs are unique cluster-wide. This enables consistent microservice connectivity and supports higher-level traffic management layers like Ingress and service meshes.
=========
Which of these commands is used to retrieve the documentation and field definitions for a Kubernetes resource?
kubectl explain
kubectl api-resources
kubectl get --help
kubectl show
kubectl explain is the command that shows documentation and field definitions for Kubernetes resource schemas, so A is correct. Kubernetes resources have a structured schema: top-level fields like apiVersion, kind, and metadata, and resource-specific structures like spec and status. kubectl explain lets you explore these structures directly from your cluster’s API discovery information, including field types, descriptions, and nested fields.
For example, kubectl explain deployment describes the Deployment resource, and kubectl explain deployment.spec dives into the spec structure. You can continue deeper, such as kubectl explain deployment.spec.template.spec.containers to discover container fields. This is especially useful when writing or troubleshooting manifests, because it reduces guesswork and prevents invalid YAML fields that would be rejected by the API server. It also helps when APIs evolve: you can confirm which fields exist in your cluster’s current version and what they mean.
The other commands do different things. kubectl api-resources lists resource types and their short names, whether they are namespaced, and supported verbs—useful discovery, but not detailed field definitions. kubectl get --help shows CLI usage help for kubectl get, not the Kubernetes object schema. kubectl show is not a standard kubectl subcommand.
From a Kubernetes “declarative configuration” perspective, correct manifests are critical: controllers reconcile desired state from spec, and subtle field mistakes can change runtime behavior. kubectl explain is a built-in way to learn the schema and write manifests that align with the Kubernetes API’s expectations. That’s why it’s commonly recommended in Kubernetes documentation and troubleshooting workflows.
=========
What is Flux constructed with?
GitLab Environment Toolkit
GitOps Toolkit
Helm Toolkit
GitHub Actions Toolkit
The correct answer is B: GitOps Toolkit. Flux is a GitOps solution for Kubernetes, and in Flux v2 the project is built as a set of Kubernetes controllers and supporting components collectively referred to as the GitOps Toolkit. This toolkit provides the building blocks for implementing GitOps reconciliation: sourcing artifacts (Git repositories, Helm repositories, OCI artifacts), applying manifests (Kustomize/Helm), and continuously reconciling cluster state to match the desired state declared in Git.
This construction matters because it reflects Flux’s modular architecture. Instead of being a single monolithic daemon, Flux is composed of controllers that each handle a part of the GitOps workflow: fetching sources, rendering configuration, and applying changes. This makes it more Kubernetes-native: everything is declarative, runs in the cluster, and can be managed like other workloads (RBAC, namespaces, upgrades, observability).
Why the other options are wrong:
“GitLab Environment Toolkit” and “GitHub Actions Toolkit” are not what Flux is built from. Flux can integrate with many SCM providers and CI systems, but it is not “constructed with” those.
“Helm Toolkit” is not the named foundational set Flux is built upon. Flux can deploy Helm charts, but that’s a capability, not its underlying construction.
In cloud-native delivery, Flux implements the key GitOps control loop: detect changes in Git (or other declared sources), compute desired Kubernetes state, and apply it while continuously checking for drift. The GitOps Toolkit is the set of controllers enabling that loop.
Therefore, the verified correct answer is B.
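As a sketch of how the toolkit's controllers fit together, the pair of resources below shows a source (where desired state lives) and a reconciliation object (how it gets applied). The repository URL, names, and path are placeholders, not from any real setup:

```yaml
# Source-controller object: the Git repository holding desired state (URL/branch are assumptions)
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 1m
  url: https://example.com/org/my-app   # placeholder repository
  ref:
    branch: main
---
# Kustomize-controller object: applies manifests from that source and prunes drift
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: my-app
  path: ./deploy        # assumed directory of manifests in the repo
  prune: true           # delete cluster objects removed from Git
```

Each object is handled by a dedicated controller, which is exactly the modular construction the GitOps Toolkit name refers to.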
=========
What is the main purpose of a DaemonSet?
A DaemonSet ensures that all (or certain) nodes run a copy of a Pod.
A DaemonSet ensures that the kubelet is constantly up and running.
A DaemonSet ensures that there are as many pods running as specified in the replicas field.
A DaemonSet ensures that a process (agent) runs on every node.
The correct answer is A. A DaemonSet is a workload controller whose job is to ensure that a specific Pod runs on all nodes (or on a selected subset of nodes) in the cluster. This is fundamentally different from Deployments/ReplicaSets, which aim to maintain a certain replica count regardless of node count. With a DaemonSet, the number of Pods is implicitly tied to the number of eligible nodes: add a node, and the DaemonSet automatically schedules a Pod there; remove a node, and its Pod goes away.
DaemonSets are commonly used for node-level services and background agents: log collectors, node monitoring agents, storage daemons, CNI components, or security agents—anything where you want a presence on each node to interact with node resources. This aligns with option D’s phrasing (“agent on every node”), but option A is the canonical definition and is slightly broader because it covers “all or certain nodes” (via node selectors/affinity/taints-tolerations) and the fact that the unit is a Pod.
Why the other options are wrong: DaemonSets do not “keep kubelet running” (B); kubelet is a node service managed by the OS. DaemonSets do not use a replicas field to maintain a specific count (C); that’s Deployment/ReplicaSet behavior.
Operationally, DaemonSets matter for cluster operations because they provide consistent node coverage and automatically react to node pool scaling. They also require careful scheduling constraints so they land only where intended (e.g., only Linux nodes, only GPU nodes). But the main purpose remains: ensure a copy of a Pod runs on each relevant node—option A.
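A minimal DaemonSet sketch (the agent name and image are placeholders) makes the difference from a Deployment visible: there is no replicas field, and node count alone drives Pod count:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-log-agent
spec:
  selector:
    matchLabels:
      app: node-log-agent
  template:
    metadata:
      labels:
        app: node-log-agent
    spec:
      # Example scheduling constraint: only land on Linux nodes
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: agent
        image: example.com/log-agent:1.0   # placeholder image
```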
=========
Kubernetes ___ protect you against voluntary interruptions (such as deleting Pods, draining nodes) to run applications in a highly available manner.
Pod Topology Spread Constraints
Pod Disruption Budgets
Taints and Tolerations
Resource Limits and Requests
The correct answer is B: Pod Disruption Budgets (PDBs). A PDB is a policy object that limits how many Pods of an application can be voluntarily disrupted at the same time. “Voluntary disruptions” include actions such as draining a node for maintenance (kubectl drain), cluster upgrades, or an administrator deleting Pods. The core purpose is to preserve availability by ensuring that a minimum number (or percentage) of replicas remain running and ready while those planned disruptions occur.
A PDB is typically defined with either minAvailable (e.g., “at least 3 Pods must remain available”) or maxUnavailable (e.g., “no more than 1 Pod can be unavailable”). Kubernetes uses this budget when performing eviction operations. If evicting a Pod would violate the PDB, the eviction is blocked (or delayed), which forces maintenance workflows to proceed more safely—either by draining more slowly, scaling up first, or scheduling maintenance in stages.
Why the other options are not correct: topology spread constraints (A) influence scheduling distribution across failure domains but don’t directly protect against voluntary disruptions. Taints and tolerations (C) control where Pods can schedule, not how many can be disrupted. Resource requests/limits (D) control CPU/memory allocation and do not guard availability during drains or deletions.
PDBs also work best when paired with Deployments/StatefulSets that maintain replicas and with readiness probes that accurately represent whether a Pod can serve traffic. PDBs do not prevent involuntary disruptions (node crashes), but they materially reduce risk during planned operations—exactly what the question is targeting.
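For instance, a PDB keeping at least three replicas of a hypothetical app available during drains could look like this (name and label are assumptions):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 3        # evictions are blocked if fewer than 3 matching Pods would remain available
  selector:
    matchLabels:
      app: myapp
```

Equivalently, maxUnavailable: 1 could be used instead of minAvailable, depending on which invariant is easier to reason about for the workload.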
=========
Which of the following is a definition of Hybrid Cloud?
A combination of services running in public and private data centers, only including data centers from the same cloud provider.
A cloud native architecture that uses services running in public clouds, excluding data centers in different availability zones.
A cloud native architecture that uses services running in different public and private clouds, including on-premises data centers.
A combination of services running in public and private data centers, excluding serverless functions.
A hybrid cloud architecture combines public cloud and private/on-premises environments, often spanning multiple infrastructure domains while maintaining some level of portability, connectivity, and unified operations. Option C captures the commonly accepted definition: services run across public and private clouds, including on-premises data centers, so C is correct.
Hybrid cloud is not limited to a single cloud provider (which is why A is too restrictive). Many organizations adopt hybrid cloud to meet regulatory requirements, data residency constraints, latency needs, or to preserve existing investments while still using public cloud elasticity. In Kubernetes terms, hybrid strategies often include running clusters both on-prem and in one or more public clouds, then standardizing deployment through Kubernetes APIs, GitOps, and consistent security/observability practices.
Option B is incorrect because excluding data centers in different availability zones is not a defining property; in fact, hybrid deployments commonly use multiple zones/regions for resilience. Option D is a distraction: serverless inclusion or exclusion does not define hybrid cloud. Hybrid is about the combination of infrastructure environments, not a specific compute model.
A practical cloud-native view is that hybrid architectures introduce challenges around identity, networking, policy enforcement, and consistent observability across environments. Kubernetes helps because it provides a consistent control plane API and workload model regardless of where it runs. Tools like service meshes, federated identity, and unified monitoring can further reduce fragmentation.
So, the most accurate definition in the given choices is C: hybrid cloud combines public and private clouds, including on-premises infrastructure, to run services in a coordinated architecture.
=========
Which is the correct kubectl command to display logs in real time?
kubectl logs -p test-container-1
kubectl logs -c test-container-1
kubectl logs -l test-container-1
kubectl logs -f test-container-1
To stream logs in real time with kubectl, you use the follow option -f, so D is correct. In Kubernetes, kubectl logs retrieves logs from containers in a Pod. By default, it returns the current log output and exits. When you add -f, kubectl keeps the connection open and continuously prints new log lines as they are produced, similar to tail -f on Linux. This is especially useful for debugging live behavior, watching startup sequences, or monitoring an application during a rollout.
The other flags serve different purposes. -p (as seen in option A) requests logs from the previous instance of a container (useful after a restart/crash), not real-time streaming. -c (option B) selects a specific container within a multi-container Pod; it doesn’t stream by itself (though it can be combined with -f). -l (option C) is used with kubectl logs to select Pods by label, but again it is not the streaming flag; streaming requires -f.
In real troubleshooting, you commonly combine flags, e.g. kubectl logs -f pod-name -c container-name for streaming logs from a specific container, or kubectl logs -f -l app=myapp to stream from Pods matching a label selector (depending on kubectl behavior/version). But the key answer to “display logs in real time” is the follow flag: -f.
Therefore, the correct selection is D.
=========
What is the goal of load balancing?
Automatically measure request performance across instances of an application.
Automatically distribute requests across different versions of an application.
Automatically distribute instances of an application across the cluster.
Automatically distribute requests across instances of an application.
The core goal of load balancing is to distribute incoming requests across multiple instances of a service so that no single instance becomes overloaded and so that the overall service is more available and responsive. That matches option D, which is the correct answer.
In Kubernetes, load balancing commonly appears through the Service abstraction. A Service selects a set of Pods using labels and provides stable access via a virtual IP (ClusterIP) and DNS name. Traffic sent to the Service is then forwarded to one of the healthy backend Pods. This spreads load across replicas and provides resilience: if one Pod fails, it is removed from endpoints (or becomes NotReady) and traffic shifts to remaining replicas. The actual traffic distribution mechanism depends on the networking implementation (kube-proxy using iptables/IPVS or an eBPF dataplane), but the intent remains consistent: distribute requests across multiple backends.
Option A describes monitoring/observability, not load balancing. Option B describes progressive delivery patterns like canary or A/B routing; that can be implemented with advanced routing layers (Ingress controllers, service meshes), but it’s not the general definition of load balancing. Option C describes scheduling/placement of instances (Pods) across cluster nodes, which is the role of the scheduler and controllers, not load balancing.
In cloud environments, load balancing may also be implemented by external load balancers (cloud LBs) in front of the cluster, then forwarded to NodePorts or ingress endpoints, and again balanced internally to Pods. At each layer, the objective is the same: spread request traffic across multiple service instances to improve performance and availability.
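A minimal Service manifest illustrates the mechanism: requests to the Service's port are distributed across all healthy Pods matching the selector. Names, labels, and ports here are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp        # traffic is spread across healthy Pods carrying this label
  ports:
  - port: 80          # port clients connect to on the Service's virtual IP
    targetPort: 8080  # container port on the backend Pods
```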
=========
Which Kubernetes-native deployment strategy supports zero-downtime updates of a workload?
Canary
Recreate
BlueGreen
RollingUpdate
D (RollingUpdate) is correct. In Kubernetes, the Deployment resource’s default update strategy is RollingUpdate, which replaces Pods gradually rather than all at once. This supports zero-downtime updates when the workload is properly configured (sufficient replicas, correct readiness probes, and appropriate maxUnavailable / maxSurge settings). As new Pods come up and become Ready, old Pods are terminated in a controlled way, keeping the service available throughout the rollout.
RollingUpdate’s “zero downtime” is achieved by maintaining capacity while transitioning between versions. For example, with multiple replicas, Kubernetes can create new Pods, wait for readiness, then scale down old Pods, ensuring traffic continues to flow to healthy instances. Readiness probes are critical: they prevent traffic from being routed to a Pod until it’s actually ready to serve.
Why other options are not the Kubernetes-native “strategy” answer here:
Recreate (B) explicitly stops old Pods before starting new ones, causing downtime for most services.
Canary (A) and BlueGreen (C) are real deployment patterns, but in “Kubernetes-native deployment strategy” terms, the built-in Deployment strategies are RollingUpdate and Recreate. Canary/BlueGreen typically require additional tooling/controllers (service mesh, ingress controller features, or progressive delivery operators) to manage traffic shifting between versions.
So, for a Kubernetes-native strategy that supports zero-downtime updates, the correct and verified choice is RollingUpdate (D).
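A sketch of a Deployment configured for zero-downtime rollouts, combining the strategy settings and readiness probe discussed above (names, image, and probe path are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # allow one extra Pod above desired count during the rollout
      maxUnavailable: 0    # never dip below desired capacity
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: example.com/myapp:2.0   # placeholder image for the new version
        readinessProbe:                # gates traffic until the Pod actually serves
          httpGet:
            path: /healthz             # assumed health endpoint
            port: 8080
```

With maxUnavailable: 0, every old Pod is replaced only after its successor reports Ready, which is what keeps capacity constant throughout.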
=========
What is the Kubernetes object used for running a recurring workload?
Job
Batch
DaemonSet
CronJob
A recurring workload in Kubernetes is implemented with a CronJob, so the correct choice is D. A CronJob is a controller that creates Jobs on a schedule defined in standard cron format (minute, hour, day of month, month, day of week). This makes CronJobs ideal for periodic tasks like backups, report generation, log rotation, and cleanup tasks.
A Job (option A) is run-to-completion but is typically a one-time execution; it ensures that a specified number of Pods successfully terminate. You can use a Job repeatedly, but something else must create it each time—CronJob is that built-in scheduler. Option B (“Batch”) is not a standard workload resource type (batch is an API group, not the object name used here). Option C (DaemonSet) ensures one Pod runs on every node (or selected nodes), which is not “recurring,” it’s “always present per node.”
CronJobs include operational controls that matter in real clusters. For example, concurrencyPolicy controls what happens if a scheduled run overlaps with a previous run (Allow, Forbid, Replace). startingDeadlineSeconds can handle missed schedules (e.g., if the controller was down). History limits (successfulJobsHistoryLimit, failedJobsHistoryLimit) help manage cleanup and troubleshooting. Each scheduled execution results in a Job with its own Pods, which can be inspected with kubectl get jobs and kubectl logs.
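The controls above can be sketched in a minimal manifest; the schedule, names, and image are illustrative placeholders:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"           # every day at 02:00
  concurrencyPolicy: Forbid       # skip a run if the previous one is still going
  successfulJobsHistoryLimit: 3   # keep a few completed Jobs for inspection
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: example.com/backup:1.0   # placeholder image
```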
So the correct Kubernetes object for a recurring workload is CronJob (D): it provides native scheduling and creates Jobs automatically according to the defined cadence.
=========
In a cloud native world, what does the IaC abbreviation stand for?
Infrastructure and Code
Infrastructure as Code
Infrastructure above Code
Infrastructure across Code
IaC stands for Infrastructure as Code, which is option B. In cloud native environments, IaC is a core operational practice: infrastructure (networks, clusters, load balancers, IAM roles, storage classes, DNS records, and more) is defined using code-like, declarative configuration rather than manual, click-driven changes. This approach mirrors Kubernetes’ own declarative model—where you define desired state in manifests and controllers reconcile the cluster to match.
IaC improves reliability and velocity because it makes infrastructure repeatable, version-controlled, reviewable, and testable. Teams can store infrastructure definitions in Git, use pull requests for change review, and run automated checks to validate formatting, policies, and safety constraints. If an environment must be recreated (disaster recovery, test environments, regional expansion), IaC enables consistent reproduction with fewer human errors.
In Kubernetes-centric workflows, IaC often covers both the base platform and the workloads layered on top. For example, provisioning might include the Kubernetes control plane, node pools, networking, and identity integration, while Kubernetes manifests (or Helm/Kustomize) define Deployments, Services, RBAC, Ingress, and storage resources. GitOps extends this further by continuously reconciling cluster configuration from a Git source of truth.
The incorrect options (Infrastructure and Code / above / across) are not standard terms. The key idea is “infrastructure treated like software”: changes are made through code commits, go through CI checks, and are rolled out in controlled ways. This aligns with cloud native goals: faster iteration, safer operations, and easier auditing. In short, IaC is the operational backbone that makes Kubernetes and cloud platforms manageable at scale, enabling consistent environments and reducing configuration drift.
=========
Which kubectl command is useful for collecting information about any type of resource that is active in a Kubernetes cluster?
describe
list
expose
explain
The correct answer is A (describe), used as kubectl describe <resource-type> <name>: it collects detailed state and recent events for any type of resource that is active in the cluster.
kubectl get (not listed) is typically used for listing objects and their summary fields, but kubectl describe goes deeper: for a Pod it will show container images, resource requests/limits, probes, mounted volumes, node assignment, IPs, conditions, and recent scheduling/pulling/starting events. For a Node it shows capacity/allocatable resources, labels/taints, conditions, and node events. Those event details often explain why something is Pending, failing to pull images, failing readiness checks, or being evicted.
Option B (“list”) is not a standard kubectl subcommand for retrieving resource information (you would use get for listing). Option C (expose) is for creating a Service to expose a resource (like a Deployment). Option D (explain) is for viewing API schema/field documentation (e.g., kubectl explain deployment.spec.replicas) and does not report what is currently happening in the cluster.
So, for gathering detailed live diagnostics about a resource in the cluster, the best kubectl command is kubectl describe, which corresponds to option A.
=========
What is the purpose of the kubelet component within a Kubernetes cluster?
A dashboard for Kubernetes clusters that allows management and troubleshooting of applications.
A network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
A component that watches for newly created Pods with no assigned node, and selects a node for them to run on.
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
The kubelet is the primary node agent in Kubernetes. It runs on every worker node (and often on control-plane nodes too if they run workloads) and is responsible for ensuring that containers described by PodSpecs are actually running and healthy on that node. The kubelet continuously watches the Kubernetes API (via the control plane) for Pods that have been scheduled to its node, then it collaborates with the node’s container runtime (through CRI) to pull images, create containers, start them, and manage their lifecycle. It also mounts volumes, configures the Pod’s networking (working with the CNI plugin), and reports Pod and node status back to the API server.
Option D captures the core: “an agent on each node that makes sure containers are running in a Pod.” That includes executing probes (liveness, readiness, startup), restarting containers based on the Pod’s restartPolicy, and enforcing resource constraints in coordination with the runtime and OS.
Why the other options are wrong: A describes the Kubernetes Dashboard (or similar UI tools), not kubelet. B describes kube-proxy, which programs node-level networking rules (iptables/ipvs/eBPF depending on implementation) to implement Service virtual IP behavior. C describes the kube-scheduler, which selects a node for Pods that do not yet have an assigned node.
A useful way to remember kubelet’s role is: scheduler decides where, kubelet makes it happen there. Once the scheduler binds a Pod to a node, kubelet becomes responsible for reconciling “desired state” (PodSpec) with “observed state” (running containers). If a container crashes, kubelet will restart it according to policy; if an image is missing, it will pull it; if a Pod is deleted, it will stop containers and clean up. This node-local reconciliation loop is fundamental to Kubernetes’ self-healing and declarative operation model.
=========
What is a sidecar container?
A Pod that runs next to another container within the same Pod.
A container that runs next to another Pod within the same namespace.
A container that runs next to another container within the same Pod.
A Pod that runs next to another Pod within the same namespace.
A sidecar container is an additional container that runs alongside the main application container within the same Pod, sharing network and storage context. That matches option C, so C is correct. The sidecar pattern is used to add supporting capabilities to an application without modifying the application code. Because both containers are in the same Pod, the sidecar can communicate with the main container over localhost and share volumes for files, sockets, or logs.
Common sidecar examples include: log forwarders that tail application logs and ship them to a logging system, proxies (service mesh sidecars like Envoy) that handle mTLS and routing policy, config reloaders that watch ConfigMaps and signal the main process, and local caching agents. Sidecars are especially powerful in cloud-native systems because they standardize cross-cutting concerns—security, observability, traffic policy—across many workloads.
Options A and D incorrectly describe “a Pod running next to …” which is not how sidecars work; sidecars are containers, not separate Pods. Running separate Pods “next to” each other in a namespace does not give the same shared network namespace and tightly coupled lifecycle. Option B is also incorrect for the same reason: a sidecar is not a separate Pod; it is a container in the same Pod.
Operationally, sidecars share the Pod lifecycle: they are scheduled together, scaled together, and generally terminated together. This is both a benefit (co-location guarantees) and a responsibility (resource requests/limits should include the sidecar’s needs, and failure modes should be understood). Kubernetes is increasingly formalizing sidecar behavior (e.g., sidecar containers with ordered startup semantics), but the core definition remains: a helper container in the same Pod.
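The log-forwarder example above can be sketched as a two-container Pod sharing an emptyDir volume (all names and images are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}                 # shared scratch volume, visible to both containers
  containers:
  - name: app                    # main application container
    image: example.com/myapp:1.0           # placeholder image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-forwarder          # sidecar tailing the shared log directory
    image: example.com/log-forwarder:1.0   # placeholder image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
```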
=========
What service account does a Pod use in a given namespace when the service account is not specified?
admin
sysadmin
root
default
D (default) is correct. In Kubernetes, if you create a Pod (or a controller creates Pods) without specifying spec.serviceAccountName, Kubernetes assigns the Pod the default ServiceAccount in that namespace. The ServiceAccount determines what identity the Pod uses when accessing the Kubernetes API (for example, via the in-cluster token mounted into the Pod, when token automounting is enabled).
Every namespace typically has a default ServiceAccount created automatically. The permissions associated with that ServiceAccount are determined by RBAC bindings. In many clusters, the default ServiceAccount has minimal permissions (or none) as a security best practice, because leaving it overly privileged would allow any Pod to access sensitive cluster APIs.
Why the other options are wrong: Kubernetes does not automatically choose “admin,” “sysadmin,” or “root” service accounts. Those are not standard implicit identities, and automatically granting admin privileges would be insecure. Instead, Kubernetes follows a predictable, least-privilege-friendly default: use the namespace’s default ServiceAccount unless you explicitly request a different one.
Operationally, this matters for security and troubleshooting. If an application in a Pod is failing with “forbidden” errors when calling the API, it often means it’s using the default ServiceAccount without the necessary RBAC permissions. The correct fix is usually to create a dedicated ServiceAccount and bind only the required roles, then set serviceAccountName in the Pod template. Conversely, if you’re hardening a cluster, you often disable automounting of service account tokens for Pods that don’t need API access.
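The explicit form looks like this; the ServiceAccount name and image are hypothetical, and omitting the serviceAccountName line is exactly what makes the Pod fall back to "default":

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-client
spec:
  serviceAccountName: reporting-sa   # explicit identity; without this field, "default" is used
  containers:
  - name: app
    image: example.com/reporter:1.0  # placeholder image
```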
Therefore, the verified correct answer is D: default.
=========
Which statement about Ingress is correct?
Ingress provides a simple way to track network endpoints within a cluster.
Ingress is a Service type like NodePort and ClusterIP.
Ingress is a construct that allows you to specify how a Pod is allowed to communicate.
Ingress exposes routes from outside the cluster to Services in the cluster.
Ingress is the Kubernetes API resource for defining external HTTP/HTTPS routing into the cluster, so D is correct. An Ingress object specifies rules such as hostnames (e.g., app.example.com), URL paths (e.g., /api), and TLS configuration, mapping those routes to Kubernetes Services. This provides Layer 7 routing capabilities beyond what a basic Service offers.
Ingress is not a Service type (so B is wrong). Service types (ClusterIP, NodePort, LoadBalancer, ExternalName) are part of the Service API and operate at Layer 4. Ingress is a separate API object that depends on an Ingress Controller to actually implement routing. The controller watches Ingress resources and configures a reverse proxy/load balancer (like NGINX, HAProxy, or a cloud load balancer integration) to enforce the desired routing. Without an Ingress Controller, creating an Ingress object alone will not route traffic.
Option A describes endpoint tracking (that’s closer to Endpoints/EndpointSlice). Option C describes NetworkPolicy, which controls allowed network flows between Pods/namespaces. Ingress is about exposing and routing incoming application traffic from outside the cluster to internal Services.
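A minimal Ingress manifest illustrating host and path routing to a Service; the hostname, path, and Service name are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: app.example.com            # placeholder hostname
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service        # requests matching host+path are routed here
            port:
              number: 80
```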
So the verified correct statement is D: Ingress exposes routes from outside the cluster to Services in the cluster.
=========
How many different Kubernetes service types can you define?
2
3
4
5
Kubernetes defines four primary Service types, which is why C (4) is correct. The commonly recognized Service spec.type values are:
ClusterIP: The default type. Exposes the Service on an internal virtual IP reachable only within the cluster. This supports typical east-west traffic between workloads.
NodePort: Exposes the Service on a static port on each node. Traffic to any node’s IP on that port is forwarded to the Service’s backend Pods, allowing access from outside the cluster without a cloud load balancer.
LoadBalancer: Integrates with a cloud provider (or load balancer implementation) to provision an external load balancer and route traffic to the Service. This is common in managed Kubernetes.
ExternalName: Maps the Service name to an external DNS name via a CNAME record, allowing in-cluster clients to use a consistent Service DNS name to reach an external dependency.
Some people also talk about “Headless Services,” but headless is not a separate type; it’s a behavior achieved by setting clusterIP: None. Headless Services still use the Service API object but change DNS and virtual-IP behavior to return endpoint IPs directly rather than a ClusterIP. That’s why the canonical count of “Service types” is four.
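The headless behavior mentioned above is configured like this; the name, label, and port are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None      # headless: DNS resolves to the Pod IPs directly, no virtual IP
  selector:
    app: db
  ports:
  - port: 5432
```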
This question tests understanding of the Service abstraction: Service type controls how a stable service identity is exposed (internal VIP, node port, external LB, or DNS alias), while selectors/endpoints control where traffic goes (the backend Pods). Different environments will favor different types: ClusterIP for internal microservices, LoadBalancer for external exposure in cloud, NodePort for bare-metal or simple access, ExternalName for bridging to outside services.
Therefore, the verified answer is C (4).
=========
What does CNCF stand for?
Cloud Native Community Foundation
Cloud Native Computing Foundation
Cloud Neutral Computing Foundation
Cloud Neutral Community Foundation
CNCF stands for the Cloud Native Computing Foundation, making B correct. CNCF is the foundation that hosts and sustains many cloud-native open source projects, including Kubernetes, and provides governance, neutral stewardship, and community infrastructure to help projects grow and remain vendor-neutral.
CNCF’s scope includes not only Kubernetes but also a broad ecosystem of projects across observability, networking, service meshes, runtime security, CI/CD, and application delivery. The foundation defines processes for project incubation and graduation, promotes best practices, organizes community events, and supports interoperability and adoption through reference architectures and education.
In the Kubernetes context, CNCF’s role matters because Kubernetes is a massive multi-vendor project. Neutral governance reduces the risk that any single company can unilaterally control direction. This fosters broad contribution and adoption across cloud providers and enterprises. CNCF also supports the broader “cloud native” definition, often associated with containerization, microservices, declarative APIs, automation, and resilience principles.
The incorrect options are close-sounding but not accurate expansions. “Cloud Native Community Foundation” and the “Cloud Neutral …” variants are not the recognized meaning. The correct official name is Cloud Native Computing Foundation.
So, the verified answer is B, and understanding CNCF helps connect Kubernetes to its broader ecosystem of standardized, interoperable cloud-native tooling.
=========
Which component of the Kubernetes architecture is responsible for integration with the CRI container runtime?
kubeadm
kubelet
kube-apiserver
kubectl
The correct answer is B: kubelet. The Container Runtime Interface (CRI) defines how Kubernetes interacts with container runtimes in a consistent, pluggable way. The component that speaks CRI is the kubelet, the node agent responsible for running Pods on each node. When the kube-scheduler assigns a Pod to a node, the kubelet reads the PodSpec and makes the runtime calls needed to realize that desired state—pull images, create a Pod sandbox, start containers, stop containers, and retrieve status and logs. Those calls are made via CRI to a CRI-compliant runtime such as containerd or CRI-O.
Why not the others:
kubeadm bootstraps clusters (init/join/upgrade workflows) but does not run containers or speak CRI for workload execution.
kube-apiserver is the control plane API frontend; it stores and serves cluster state and does not directly integrate with runtimes.
kubectl is just a client tool that sends API requests; it is not involved in runtime integration on nodes.
This distinction matters operationally. If the runtime is misconfigured or CRI endpoints are unreachable, kubelet will report errors and Pods can get stuck in ContainerCreating, image pull failures, or runtime errors. Debugging often involves checking kubelet logs and runtime service health, because kubelet is the integration point bridging Kubernetes scheduling/state with actual container execution.
So, the node-level component responsible for CRI integration is the kubelet—option B.
=========
Which of the following is a correct definition of a Helm chart?
A Helm chart is a collection of YAML files bundled in a tar.gz file and can be applied without decompressing it.
A Helm chart is a collection of JSON files and contains all the resource definitions to run an application on Kubernetes.
A Helm chart is a collection of YAML files that can be applied on Kubernetes by using the kubectl tool.
A Helm chart is similar to a package and contains all the resource definitions to run an application on Kubernetes.
A Helm chart is best described as a package for Kubernetes applications, containing the resource definitions (as templates) and metadata needed to install and manage an application—so D is correct. Helm is a package manager for Kubernetes; the chart is the packaging format. Charts include a Chart.yaml (metadata), a values.yaml (default configuration values), and a templates/ directory containing Kubernetes manifests written as templates. When you install a chart, Helm renders those templates into concrete Kubernetes YAML manifests by substituting values, then applies them to the cluster.
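A compressed sketch of that layout, assuming a hypothetical chart named myapp (chart name, values, and template fragment are all illustrative):

```yaml
# Chart.yaml — package metadata
apiVersion: v2
name: myapp
version: 0.1.0

# values.yaml — default, overridable configuration, e.g.:
#   replicaCount: 2
#   image: example.com/myapp:1.0

# templates/deployment.yaml — a templated manifest rendered at install time, e.g.:
#   spec:
#     replicas: {{ .Values.replicaCount }}
```

Installing the chart renders the templates with the effective values and tracks the result as a release, which is what enables upgrade and rollback.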
Option A is misleading/incomplete. While charts are often distributed as a compressed tarball (.tgz), the defining feature is not “YAML bundled in tar.gz” but the packaging and templating model that supports install/upgrade/rollback. Option B is incorrect because Helm charts are not “collections of JSON files” by definition; Kubernetes resources can be expressed as YAML or JSON, but Helm charts overwhelmingly use templated YAML. Option C is incorrect because charts are not simply YAML applied by kubectl; Helm manages releases, tracks installed resources, and supports upgrades and rollbacks. Helm uses Kubernetes APIs under the hood, but the value of Helm is the lifecycle and packaging system, not “kubectl apply.”
In cloud-native application delivery, Helm helps standardize deployments across environments (dev/stage/prod) by externalizing configuration through values. It reduces copy/paste and supports reuse via dependencies and subcharts. Helm also supports versioning of application packages, allowing teams to upgrade predictably and roll back if needed—critical for production change management.
So, the correct and verified definition is D: a Helm chart is like a package containing the resource definitions needed to run an application on Kubernetes.
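The packaging and release lifecycle described above can be exercised end to end with a few commands. A minimal sketch, assuming Helm 3 is installed; `mychart` and the release name `demo` are placeholders:

```shell
# Scaffold a chart and inspect its structure
helm create mychart
ls mychart          # Chart.yaml  charts/  templates/  values.yaml

# Render templates locally into plain Kubernetes YAML (no cluster changes)
helm template demo ./mychart

# Install as a managed release, upgrade with an overridden value, roll back
helm install demo ./mychart
helm upgrade demo ./mychart --set replicaCount=3
helm rollback demo 1
```

The `upgrade`/`rollback` pair is exactly what distinguishes Helm from plain `kubectl apply`: Helm tracks each release revision so changes can be reverted as a unit.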
=========
Kubernetes ___ allows you to automatically manage the number of nodes in your cluster to meet demand.
Node Autoscaler
Cluster Autoscaler
Horizontal Pod Autoscaler
Vertical Pod Autoscaler
Kubernetes supports multiple autoscaling mechanisms, but they operate at different layers. The question asks specifically about automatically managing the number of nodes in the cluster, which is the role of the Cluster Autoscaler—therefore B is correct.
Cluster Autoscaler monitors the scheduling state of the cluster. When Pods are pending because there are not enough resources (CPU/memory) available on existing nodes—meaning the scheduler cannot place them—Cluster Autoscaler can request that the underlying infrastructure (typically a cloud provider node group / autoscaling group) add nodes. Conversely, when nodes are underutilized and Pods can be rescheduled elsewhere, Cluster Autoscaler can drain those nodes (respecting disruption constraints like PodDisruptionBudgets) and then remove them to reduce cost. This aligns with cloud-native elasticity: scale infrastructure up and down automatically based on workload needs.
The other options are different: Horizontal Pod Autoscaler (HPA) changes the number of Pod replicas for a workload (like a Deployment) based on metrics (CPU utilization, memory, or custom metrics). It scales the application layer, not the node layer. Vertical Pod Autoscaler (VPA) changes resource requests/limits (CPU/memory) for Pods, effectively “scaling up/down” the size of individual Pods. It also does not directly change node count, though its adjustments can influence scheduling pressure. “Node Autoscaler” is not the canonical Kubernetes component name used in standard terminology; the widely referenced upstream component for node count is Cluster Autoscaler.
In real systems, these autoscalers often work together: HPA increases replicas when traffic rises; that may cause Pods to go Pending if nodes are full; Cluster Autoscaler then adds nodes; scheduling proceeds; later, traffic drops, HPA reduces replicas and Cluster Autoscaler removes nodes. This layered approach provides both performance and cost efficiency.
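The layered interplay described above can be observed with commands like these. The deployment name `web` is a placeholder, and Cluster Autoscaler itself is deployed/configured per cloud provider rather than via kubectl:

```shell
# Application layer: HPA targets ~70% CPU across 2-10 replicas
kubectl autoscale deployment web --cpu-percent=70 --min=2 --max=10

# When replicas exceed node capacity, new Pods sit in Pending...
kubectl get pods --field-selector=status.phase=Pending

# ...which is the signal Cluster Autoscaler reacts to; watch nodes join/leave
kubectl get nodes -w
```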
=========
What native runtime is Open Container Initiative (OCI) compliant?
runC
runV
kata-containers
gvisor
The Open Container Initiative (OCI) publishes open specifications for container images and container runtimes so that tools across the ecosystem remain interoperable. When a runtime is “OCI-compliant,” it means it implements the OCI Runtime Specification (how to run a container from a filesystem bundle and configuration) and/or works cleanly with OCI image formats through the usual layers (image → unpack → runtime). runC is the best-known, widely used reference implementation of the OCI runtime specification and is the low-level runtime underneath many higher-level systems. In Kubernetes, you typically interact with a higher-level container runtime (such as containerd or CRI-O) through the Container Runtime Interface (CRI). That higher-level runtime then uses a low-level OCI runtime to actually create Linux namespaces/cgroups, set up the container process, and start it. In many default installations, containerd delegates to runC for this low-level “create/start” work.
The other options are related but differ in what they are: Kata Containers uses lightweight VMs to provide stronger isolation while still presenting a container-like workflow; gVisor provides a user-space kernel for sandboxing containers; these can be used with Kubernetes via compatible integrations, but the canonical “native OCI runtime” answer in most curricula is runC. Finally, “runV” is not a common modern Kubernetes runtime choice in typical OCI discussions. So the most correct, standards-based answer here is A (runC) because it directly implements the OCI runtime spec and is commonly used as the default low-level runtime behind CRI implementations.
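On a containerd node, the runC layer can be confirmed directly. A sketch assuming default containerd configuration and that the `runc` binary is on the PATH:

```shell
# containerd's default configuration names runc as its low-level OCI runtime
containerd config default | grep -A2 'runtimes.runc'

# runc reports both its own version and the OCI runtime-spec version it implements
runc --version

# Per-container view from the CRI side: pods/containers the runtime manages
crictl ps
```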
=========
In the DevOps framework and culture, who builds, automates, and offers continuous delivery tools for developer teams?
Application Users
Application Developers
Platform Engineers
Cluster Operators
The correct answer is C (Platform Engineers). In modern DevOps and platform operating models, platform engineering teams build and maintain the shared delivery capabilities that product/application teams use to ship software safely and quickly. This includes CI/CD pipeline templates, standardized build and test automation, artifact management (registries), deployment tooling (Helm/Kustomize/GitOps), secrets management patterns, policy guardrails, and paved-road workflows that reduce cognitive load for developers.
While application developers (B) write the application code and often contribute pipeline steps for their service, the “build, automate, and offer tooling for developer teams” responsibility maps directly to platform engineering: they provide the internal platform that turns Kubernetes and cloud services into a consumable product. This is especially common in Kubernetes-based organizations where you want consistent deployment standards, repeatable security checks, and uniform observability.
Cluster operators (D) typically focus on the health and lifecycle of the Kubernetes clusters themselves: upgrades, node pools, networking, storage, cluster security posture, and control plane reliability. They may work closely with platform engineers, but “continuous delivery tools for developer teams” is broader than cluster operations. Application users (A) are consumers of the software, not builders of delivery tooling.
In cloud-native application delivery, this division of labor is important: platform engineers enable higher velocity with safety by automating the software supply chain—builds, tests, scans, deploys, progressive delivery, and rollback. Kubernetes provides the runtime substrate, but the platform team makes it easy and safe for developers to use it repeatedly and consistently across many services.
Therefore, Platform Engineers (C) is the verified correct choice.
=========
Which of the following options include only mandatory fields to create a Kubernetes object using a YAML file?
apiVersion, template, kind, status
apiVersion, metadata, status, spec
apiVersion, template, kind, spec
apiVersion, metadata, kind, spec
D is correct: the mandatory top-level fields for creating a Kubernetes object manifest are apiVersion, kind, metadata, and (for most objects you create) spec. These fields establish what the object is and what you want Kubernetes to do with it.
apiVersion tells Kubernetes which API group/version schema to use (e.g., apps/v1, v1). This determines valid fields and behavior.
kind identifies the resource type (e.g., Pod, Deployment, Service).
metadata contains identifying information like name, namespace, and labels/annotations used for organization, selection, and automation.
spec describes the desired state. Controllers and the kubelet reconcile actual state to match spec.
Why other choices are wrong:
status is not a mandatory input field. It’s generally written by Kubernetes controllers and reflects observed state (conditions, readiness, assigned node, etc.). Users typically do not set status when creating objects.
template is not a universal top-level field. It exists inside some resources (notably Deployment.spec.template), but it’s not a required top-level field across Kubernetes objects.
It’s true that some resources can be created without a spec (or with minimal fields), but in the exam-style framing—“mandatory fields… using a YAML file”—the canonical expected set is exactly the four in D. This aligns with how Kubernetes documentation and examples present manifests: identify the API schema and kind, give object metadata, and declare desired state.
Therefore, apiVersion + metadata + kind + spec is the only option that includes only the mandatory fields, making D the verified correct answer.
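A minimal manifest containing exactly those four fields can be validated client-side without changing the cluster. The pod and container names here are illustrative:

```shell
kubectl apply --dry-run=client -f - <<'EOF'
apiVersion: v1            # which API schema to validate against
kind: Pod                 # the resource type
metadata:
  name: demo-pod          # identity: name (plus optional namespace, labels)
spec:                     # desired state: a single nginx container
  containers:
  - name: web
    image: nginx:1.27
EOF
```

On success, kubectl reports the object as created “(dry run)”; removing any of the four top-level fields makes validation fail.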
=========
If a Pod was waiting for container images to download on the scheduled node, what state would it be in?
Failed
Succeeded
Unknown
Pending
If a Pod is waiting for its container images to be pulled to the node, it remains in the Pending phase, so D is correct. Kubernetes Pod “phase” is a high-level summary of where the Pod is in its lifecycle. Pending means the Pod has been accepted by the cluster but one or more of its containers has not started yet. That can occur because the Pod is waiting to be scheduled, waiting on volume attachment/mount, or—very commonly—waiting for the container runtime to pull the image.
When image pulling is the blocker, kubectl describe pod typically shows events such as Pulling, ErrImagePull, or Back-off pulling image, which identify which image is holding the Pod up.
Why the other phases don’t apply:
Succeeded is for run-to-completion Pods that have finished successfully (typical for Jobs).
Failed means the Pod terminated and at least one container terminated in failure (and won’t be restarted, depending on restartPolicy).
Unknown is used when the node can’t be contacted and the Pod’s state can’t be reliably determined (rare in healthy clusters).
A subtle but important Kubernetes detail: status “Waiting” reasons like ImagePullBackOff are container states inside .status.containerStatuses, while the Pod phase can still be Pending. So, “waiting for images to download” maps to Pod Pending, with container waiting reasons providing the deeper diagnosis.
Therefore, the verified correct answer is D: Pending.
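The phase-versus-container-state distinction above can be inspected directly; `demo-pod` is a placeholder name:

```shell
# High-level phase: stays "Pending" while images are still being pulled
kubectl get pod demo-pod -o jsonpath='{.status.phase}'

# Deeper diagnosis: per-container Waiting reasons such as ImagePullBackOff
kubectl get pod demo-pod \
  -o jsonpath='{.status.containerStatuses[*].state.waiting.reason}'

# The Events section shows the pull timeline (Pulling, Failed, Back-off...)
kubectl describe pod demo-pod
```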
=========
What is the resource type used to package sets of containers for scheduling in a cluster?
Pod
ContainerSet
ReplicaSet
Deployment
The Kubernetes resource used to package one or more containers into a schedulable unit is the Pod, so A is correct. Kubernetes schedules Pods onto nodes; it does not schedule individual containers. A Pod represents a single “instance” of an application component and includes one or more containers that share key runtime properties, including the same network namespace (same IP and port space) and the ability to share volumes.
Pods enable common patterns beyond “one container per Pod.” For example, a Pod may include a main application container plus a sidecar container for logging, proxying, or configuration reload. Because these containers share localhost networking and volume mounts, they can coordinate efficiently without requiring external service calls. Kubernetes manages the Pod lifecycle as a unit: the containers in a Pod are started according to container lifecycle rules and are co-located on the same node.
Option B (ContainerSet) is not a standard Kubernetes workload resource. Option C (ReplicaSet) manages a set of Pod replicas, ensuring a desired count is running, but it is not the packaging unit itself. Option D (Deployment) is a higher-level controller that manages ReplicaSets and provides rollout/rollback behavior, again operating on Pods rather than being the container-packaging unit.
From the scheduling perspective, the PodSpec defines container images, commands, resources, volumes, security context, and placement constraints. The scheduler evaluates these constraints and assigns the Pod to a node. This “Pod as the atomic scheduling unit” is fundamental to Kubernetes architecture and explains why Kubernetes-native concepts (Services, selectors, readiness, autoscaling) all revolve around Pods.
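The sidecar pattern described above can be sketched as a single Pod whose two containers share a volume and localhost networking. Image choices and names are illustrative, and the client-side dry run validates without touching the cluster:

```shell
kubectl apply --dry-run=client -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}              # scratch space shared by both containers
  containers:
  - name: app
    image: nginx:1.27
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-shipper         # sidecar reads what the app writes
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
EOF
```

Both containers are scheduled together onto one node and live or die as a unit, which is exactly the “Pod as the atomic scheduling unit” property.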
=========
What are the most important resources to guarantee the performance of an etcd cluster?
CPU and disk capacity.
Network throughput and disk I/O.
CPU and RAM memory.
Network throughput and CPU.
etcd is the strongly consistent key-value store backing Kubernetes cluster state. Its performance directly affects the entire control plane because most API operations require reads/writes to etcd. The most critical resources for etcd performance are disk I/O (especially latency) and network throughput/latency between etcd members and API servers—so B is correct.
etcd is write-ahead-log (WAL) based and relies heavily on stable, low-latency storage. Slow disks increase commit latency, which slows down object updates, watches, and controller loops. In busy clusters, poor disk performance can cause request backlogs and timeouts, showing up as slow kubectl operations and delayed controller reconciliation. That’s why production guidance commonly emphasizes fast SSD-backed storage and careful monitoring of fsync latency.
Network performance matters because etcd uses the Raft consensus protocol. Writes must be replicated to a quorum of members, and leader-follower communication is continuous. High network latency or low throughput can slow replication and increase the time to commit writes. Unreliable networking can also cause leader elections or cluster instability, further degrading performance and availability.
CPU and memory are still relevant, but they are usually not the first bottleneck compared to disk and network. CPU affects request processing and encryption overhead if enabled, while memory affects caching and compaction behavior. Disk “capacity” alone (size) is less relevant than disk I/O characteristics (latency, IOPS), because etcd performance is sensitive to fsync and write latency.
In Kubernetes operations, ensuring etcd health includes: using dedicated fast disks, keeping network stable, enabling regular compaction/defragmentation strategies where appropriate, sizing correctly (typically odd-numbered members for quorum), and monitoring key metrics (commit latency, fsync duration, leader changes). Because etcd is the persistence layer of the API, disk I/O and network quality are the primary determinants of control-plane responsiveness—hence B.
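Typical health and latency checks on an etcd member look like the following. The endpoint, certificate paths, and metrics port are assumptions based on a common kubeadm layout and will differ on other installations:

```shell
# Cluster view: leader, DB size, Raft term per member
etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint status --write-out=table

# The metric that most directly reflects disk commit (fsync) latency
curl -s http://127.0.0.1:2381/metrics | grep wal_fsync_duration
```

Sustained growth in the WAL fsync histogram is the classic early warning that slow disks are about to surface as slow kubectl operations.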
=========
How is application data maintained in containers?
Store data into data folders.
Store data in separate folders.
Store data into sidecar containers.
Store data into volumes.
Container filesystems are ephemeral: the writable layer is tied to the container lifecycle and can be lost when containers are recreated. Therefore, maintaining application data correctly means storing it in volumes, making D the correct answer. In Kubernetes, volumes provide durable or shareable storage that is mounted into containers at specific paths. Depending on the volume type, the data can persist across container restarts and even Pod rescheduling.
Kubernetes supports many volume patterns. For transient scratch data you might use emptyDir (ephemeral for the Pod’s lifetime). For durable state, you typically use PersistentVolumes consumed by PersistentVolumeClaims (PVCs), backed by storage systems via CSI drivers (cloud disks, SAN/NAS, distributed storage). This decouples the application container image from its state and enables rolling updates, rescheduling, and scaling without losing data.
Options A and B (“folders”) are incomplete because folders inside the container filesystem do not guarantee persistence. A folder is only as durable as the underlying storage; without a mounted volume, it lives in the container’s writable layer and will disappear when the container is replaced. Option C is incorrect because “sidecar containers” are not a data durability mechanism; sidecars can help ship logs or sync data, but persistent data should still be stored on volumes (or external services like managed databases).
From an application delivery standpoint, the principle is: containers should be immutable and disposable, and state should be externalized. Volumes (and external managed services) make this possible. In Kubernetes, this is a foundational pattern enabling safe rollouts, self-healing, and portability: the platform can kill and recreate Pods freely because data is maintained independently via volumes.
Therefore, the verified correct choice is D: Store data into volumes.
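A common durable-state sketch: a PVC requesting storage from the cluster’s default StorageClass, and a Pod mounting it. Names, the image, and the inline password are illustrative only:

```shell
kubectl apply --dry-run=client -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi            # provisioned by the default StorageClass
---
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
  - name: postgres
    image: postgres:16
    env:
    - name: POSTGRES_PASSWORD
      value: example          # demo only; use a Secret in real clusters
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc     # survives Pod deletion and rescheduling
EOF
```

Deleting the Pod leaves the PVC (and its data) intact; a replacement Pod that claims `data-pvc` picks up where the old one left off.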
=========
Copyright © 2014-2025 Examstrust. All Rights Reserved