Troubleshooting Apache Kafka Connector 4.7 - Mule 4
To troubleshoot Anypoint Connector for Apache Kafka (Apache Kafka Connector), become familiar with how to enable detailed logging and how to interpret commonly thrown exception messages.
Enable SSL Logging
Enable SSL logs to investigate an issue related to TLS communications.
Set your SSL log level depending on your required level of detail:
- Enable only SSL log details:
  javax.net.debug=ssl
- Enable only handshaking details. (The handshake protocol is a series of messages exchanged over the record protocol.)
  javax.net.debug=ssl:handshake
- Enable both handshaking details and SSL details:
  javax.net.debug=ssl,handshake
- Enable the dumping of all details and traffic data:
  javax.net.debug=all
  This option is very verbose and is not necessary under normal circumstances.
SSL logging results in a performance impact for HTTPS and other TLS connections. Enable SSL logging only to troubleshoot a specific issue and do not enable it for extended periods. SSL logging produces a significant number of log messages and can inundate your log file if it is left enabled and unattended.
To enable SSL logs, set the debug parameter as an argument in the runtime configuration for the application:
- In Studio, right-click the project and select Run > Run Configurations.
- On the Arguments tab, add -M-Djavax.net.debug=ssl to the VM arguments section.
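If you run the application on a standalone Mule runtime rather than from Studio, you can set the same system property through the runtime's Java Service Wrapper configuration. The following is a minimal sketch; it assumes the entry number (15 here) does not collide with existing wrapper.java.additional entries in $MULE_HOME/conf/wrapper.conf:

# In $MULE_HOME/conf/wrapper.conf: enable SSL debug logging for the JVM that runs Mule
wrapper.java.additional.15=-Djavax.net.debug=ssl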
Enable Verbose Exception Logging
Enable verbose exception logs to show a complete stack trace of the error instead of the default truncated output:
- In Studio, right-click the project and select Run > Run Configurations.
- On the Arguments tab, append the mule.verbose.exceptions property to the existing VM arguments. For example:
  -XX:PermSize=128M -XX:MaxPermSize=256M -Dmule.verbose.exceptions=true
Enable Verbose Logging
To get a better understanding of why an application’s interaction with Kafka Connector is failing, temporarily enable verbose logging for the connector.
Remember to always disable enhanced verbosity after troubleshooting, because it can affect your Mule application’s performance.
To enable verbose logging in the configuration file:
- Access Anypoint Studio and navigate to the Package Explorer view.
- Open your application's project.
- Open the src/main/resources folder.
- Open the log4j2.xml file inside the folder.
- Add an <AsyncLogger> tag inside the <Loggers> tag, for example:
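<Loggers>
...
<AsyncLogger name="com.mulesoft.connectors.kafka" level="DEBUG"/>
...
</Loggers>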
- Save your application changes.
- Click the project name in Package Explorer, and then click Run > Run As > Mule Application.
Troubleshoot Common Errors
Here is a list of the common Apache Kafka Connector errors and how to resolve them:
Failed to Load SSL Keystore
When the Kafka Connector configurations fail to load the required truststore files from the classpath, you receive the following error:
org.mule.runtime.api.connection.ConnectionException: Failed to load SSL keystore
This occurs when the Kafka Connector configurations are bundled in a dependency: the configurations fail to establish connections even though the required truststore files are available in the classpath.
To resolve this error, add the required truststore files to the main application that uses the Kafka Connector configurations.
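For example, if the truststore is packaged in the main application under src/main/resources, a TLS context defined in that application can reference it directly. The following is a minimal sketch; the file name, password placeholder, and context name are illustrative, and the connector connection element that references the TLS context depends on your configuration:

<!-- truststore.jks placed in the main application's src/main/resources -->
<tls:context name="kafkaTlsContext">
    <tls:trust-store path="truststore.jks" password="${truststore.password}" type="jks"/>
</tls:context>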
Troubleshooting clientDNSLookUp Property
Since Kafka Connector version 3.0.0, the default value for the DNS Lookup field was removed to keep the connector backward compatible. The connector keeps the same values as the previous version, but each value now corresponds to the following property:
| Value | Property |
|---|---|
| DEFAULT | use_all_dns_ips |
| USE_ALL_DNS_IPS | use_all_dns_ips |
| RESOLVE_CANONICAL_BOOTSTRAP_SERVERS_ONLY | resolve_canonical_bootstrap_servers_only |
Troubleshooting Publishing Null Values
When the Publish operation receives null as input, the operation publishes an empty byte array.
To avoid this issue, the connector enables you to set the mule.kafka.publish.useNull boolean system property:
- When set to true, the connector publishes a message with null as the value.
- When set to false or not set at all, the connector publishes a message with an empty byte array as the value.
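For example, when running from Studio, the property can be added to the VM arguments in the same way as the other system properties described earlier on this page (the value shown is illustrative):

-M-Dmule.kafka.publish.useNull=true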
For further information about how to configure properties, refer to the Use system properties documentation.
Understand Commonly Thrown Exceptions
Here is a list of commonly thrown exception messages and how to interpret them:
- `KAFKA:ALREADY_COMMITTED`: A commit fails because it was already performed.
- `KAFKA:AUTHENTICATION_ERROR(SECURITY)`: An authentication error occurs.
- `KAFKA:AUTHORIZATION_ERROR(CLIENT_SECURITY)`: An authorization error occurs because of the client configuration or permissions.
- `KAFKA:COMMIT_FAILED`: A commit fails with an unrecoverable error.
- `KAFKA:CONNECTIVITY`: A connection could not be established, or a connectivity error occurs.
- `KAFKA:ILLEGAL_STATE`: The consumer is not subscribed to any topics, and no manually assigned partitions are set to consume from.
- `KAFKA:INPUT_TOO_LARGE`: The input is too large for the broker.
- `KAFKA:INVALID_ACK_MODE`: An acknowledgement (ACK) or negative acknowledgement (NACK) operation runs over a consumer that is not in `MANUAL` mode.
- `KAFKA:INVALID_CONFIGURATION`: An invalid configuration value is set for the consumer or the producer.
- `KAFKA:INVALID_CONNECTION`: The referenced connection is invalid.
- `KAFKA:INVALID_INPUT`: The parameters set for the operation are incorrect.
- `KAFKA:INVALID_OFFSET`: The partition number set for the operation is less than `0`, exceeds the number of partitions, or is otherwise invalid.
- `KAFKA:INVALID_TOPIC`: The topic selected for the operation is invalid and is not generated automatically. This might also indicate an invalid character in the topic name.
- `KAFKA:INVALID_TOPIC_PARTITION`: The topic and partition combination selected for the operation is invalid and topics are not generated automatically. This might also indicate an invalid character in the topic name or a nonexistent partition.
- `KAFKA:NO_POLL_MADE`: No poll was made.
- `KAFKA:NOT_FOUND`: For consume operations, the poll does not return any records within the configured timeouts.
- `KAFKA:OUT_OF_RANGE`: No reset policy is defined, and the offsets for these partitions are either larger or smaller than the range of offsets the server has for the given partition.
- `KAFKA:PREVIOUS_ASSIGNATION`: `subscribe()` was previously called with topics or a pattern, without a subsequent call to `unsubscribe()`.
- `KAFKA:PRODUCER_FENCED`: Another producer with the same `transactional.id` is active.
- `KAFKA:RETRY_EXHAUSTED`: The maximum number of retries for the operation is reached.
- `KAFKA:SESSION_NOT_FOUND`: No associated session is found when executing an operation.
- `KAFKA:TIMEOUT`: A specific request takes longer than the configured timeout values.
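When a flow needs to react to one of these errors, the error type can be matched in a standard Mule error handler. The following is a minimal sketch with illustrative flow and operation placeholders, not connector-specific guidance:

<flow name="consume-from-kafka">
    <!-- Kafka operations (for example, Consume or Publish) go here.
         Any KAFKA:* error they raise can be matched by type below. -->
    <logger level="DEBUG" message="placeholder for Kafka operations"/>
    <error-handler>
        <!-- Continue the flow when a request exceeds the configured timeout -->
        <on-error-continue type="KAFKA:TIMEOUT">
            <logger level="WARN" message="#['Kafka request timed out: ' ++ error.description]"/>
        </on-error-continue>
        <!-- Propagate the error after the connector exhausts its retries -->
        <on-error-propagate type="KAFKA:RETRY_EXHAUSTED">
            <logger level="ERROR" message="#['Kafka retries exhausted: ' ++ error.description]"/>
        </on-error-propagate>
    </error-handler>
</flow>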