Upgrading and Migrating MongoDB Connector - Mule 4
Upgrade Anypoint Connector for MongoDB (MongoDB Connector) from version 5.3.x to version 6.x.x.
Changes in This Release
MongoDB Connector 6.x.x contains the following changes:
- Changed connection types and configuration
- Changed operation names, metadata, fields, and values
- Changed input sources
- Added the ability to specify which MongoDB Java driver to use
- Added the ability to overwrite, at runtime, the Write concern acknowledgment and Write concern timeout fields of the global element in the following operations:
  - Find documents
  - Insert document
  - Insert documents
  - Update documents
  - Count documents
Changes in Connection Types and Configuration
MongoDB Connector 6.x contains the following changes to connection types and configuration:
- Removed the Connection String connection type. There is now a single, unified generic connection that contains the configurable properties that were previously available in the Connection String connection type.
- Removed the Metadata configuration section. The connector now generates metadata automatically.
Config Advanced Tab
The following optional fields were added to the Advanced tab of the global configuration (a Java driver sketch of comparable settings follows this list):
- Read concern options:
  - Read concern: Controls the consistency and isolation properties of the data read from replica sets and replica set shards.
  - Read preference: Specifies the read preference for this connection.
  - Maximum staleness seconds: Specifies how stale a secondary member of a MongoDB cluster can be before the client stops using it for read operations. A value of -1 means that there is no maximum staleness threshold.
  - Read preference tags: Enables you to specify a tag set in the read preference to target the members associated with a tag in the set.
- Write concern options:
  - Write concern acknowledgment: Requests an acknowledgment that the write operation was propagated to the specified number of instances.
  - Write concern timeout: Specifies a time limit for the write concern.
  - Write concern timeout time unit: Specifies the time unit for the Write concern timeout field.
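These fields correspond to read and write concern settings in the MongoDB Java driver that the connector uses. The following is a minimal Java sketch of comparable driver-level settings, not connector code; the connection string, staleness window, and timeout values are illustrative assumptions:

```java
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.ReadConcern;
import com.mongodb.ReadPreference;
import com.mongodb.WriteConcern;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

import java.util.concurrent.TimeUnit;

public class ReadWriteConcernSketch {
    public static void main(String[] args) {
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString("mongodb://localhost:27017"))
                // Read concern: consistency and isolation level for reads
                .readConcern(ReadConcern.MAJORITY)
                // Read preference with a maximum staleness window for secondaries
                // (the driver requires a window of at least 90 seconds)
                .readPreference(ReadPreference.secondaryPreferred(120, TimeUnit.SECONDS))
                // Write concern acknowledgment (w=2) with a write concern timeout
                .writeConcern(WriteConcern.W2.withWTimeout(5, TimeUnit.SECONDS))
                .build();

        try (MongoClient client = MongoClients.create(settings)) {
            // Use the client here; the settings above apply to all operations by default.
        }
    }
}
```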
Connection Section
The following changes apply to the tabs in the Connection section of the global configuration (a Java driver sketch of comparable settings follows this list):
- Added the following fields to the General tab:
  - Required Libraries: Reference to the MongoDB Java driver. The version must be 3.11 or later.
  - Server Addresses: This field is now a list of server addresses to use for the connection.
- Added a Security tab for optional TLS context configuration.
- Added the following optional fields to the Advanced tab:
  - Authentication mechanism: Authentication mechanism used for this connection
  - Replica set name: Name of the replica set to which to connect
  - Authentication source: Database name associated with the user’s credentials
  - Compressors: List of compressors for which to enable network compression for communication between this client and a mongod or mongos instance
  - Zlib compression level: Integer that specifies the compression level when using zlib for network compression. Set this field to 0 for no compression, or to a value from 1 through 9 for compression, with 1 being the lowest compression level and 9 the highest. Higher compression levels require more processing time than lower levels.
  - Connection timeout: Connection timeout used when establishing socket connections
  - Connection timeout time unit: Time unit for the Connection timeout field
  - Local threshold: Size of the latency window for selecting among multiple suitable MongoDB instances
  - Local threshold time unit: Time unit for the Local threshold field
  - Server selection timeout: How long to block for server selection before throwing an exception
  - Server selection timeout time unit: Time unit for the Server selection timeout field
  - Socket timeout: Maximum time to wait for a send or receive operation on a socket before the attempt times out
  - Socket timeout unit: Time unit for the Socket timeout field
  - Min connection pool size: Minimum size of the connection pool
  - Max connection pool size: Maximum size of the connection pool
  - Max wait queue time: Maximum wait queue time for the connection pool
  - Max wait queue time unit: Time unit for the Max wait queue time field
  - Max connection life time: Maximum connection lifetime for the connection pool
  - Max connection life time time unit: Time unit for the Max connection life time field
  - Max connection idle time: Maximum connection idle time for the connection pool
  - Max connection idle time unit: Time unit for the Max connection idle time field
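Most of these fields have counterparts on the MongoClientSettings builder of the MongoDB Java driver that the connector uses. The following is a minimal Java sketch of comparable driver-level settings, not connector code; the hosts, credentials, and values shown are illustrative assumptions:

```java
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.MongoCompressor;
import com.mongodb.MongoCredential;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

import java.util.Collections;
import java.util.concurrent.TimeUnit;

public class AdvancedConnectionSketch {
    public static void main(String[] args) {
        // Authentication mechanism (SCRAM-SHA-256) and authentication source ("admin")
        MongoCredential credential = MongoCredential.createScramSha256Credential(
                "appUser", "admin", "secret".toCharArray());

        MongoClientSettings settings = MongoClientSettings.builder()
                // Server addresses and replica set name
                .applyConnectionString(new ConnectionString(
                        "mongodb://host1:27017,host2:27017/?replicaSet=rs0"))
                .credential(credential)
                // Compressors with a zlib compression level of 6
                .compressorList(Collections.singletonList(
                        MongoCompressor.createZlibCompressor()
                                .withProperty(MongoCompressor.LEVEL, 6)))
                // Connection timeout and socket timeout
                .applyToSocketSettings(b -> b
                        .connectTimeout(10, TimeUnit.SECONDS)
                        .readTimeout(30, TimeUnit.SECONDS))
                // Server selection timeout and local threshold (latency window)
                .applyToClusterSettings(b -> b
                        .serverSelectionTimeout(15, TimeUnit.SECONDS)
                        .localThreshold(15, TimeUnit.MILLISECONDS))
                // Connection pool sizing, wait time, lifetime, and idle time
                .applyToConnectionPoolSettings(b -> b
                        .minSize(2)
                        .maxSize(20)
                        .maxWaitTime(2, TimeUnit.MINUTES)
                        .maxConnectionLifeTime(30, TimeUnit.MINUTES)
                        .maxConnectionIdleTime(10, TimeUnit.MINUTES))
                .build();

        try (MongoClient client = MongoClients.create(settings)) {
            // Use the client here.
        }
    }
}
```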
Changed Operations, Parameters, and Return Types
The following table shows changes to operation names, input parameters, and return types:
| MongoDB 5.x Operation | Changes in MongoDB 6.0 |
|---|---|
| Count documents | Input parameters are now: Return type is now Long with the count result. |
| Create Collection | Input parameters are now: |
| Exists collection | Operation is now called Collection exists. |
| Create file from payload | |
| Create index | Input parameters are now: |
| Dump | Input parameters are now: Return type is now a List<String> type that points to the created files. Each string is a filePath type. |
| Execute command | |
| Find documents | Input parameters are now: This operation supports pagination. Each item is returned as a JSON type. |
| Find files | |
| Get file content | |
| Insert document | Return type is now an entire JSON object with the |
| Insert documents | Return type is now a Bulk Operation Result, which contains a JSON file that lists each created record and its status. |
| List collections | Return type is now a List<String> type that contains the names of the collections. |
| List indices | Operation is now called List indexes. |
| Map reduce objects | |
| Remove documents | |
| Remove files | Input parameter is now |
| Update documents | |
Changed Operations Metadata
MongoDB Connector 5.x generated operation metadata from a set of documents per collection that the user provided, taking the attributes from those documents. MongoDB Connector 6.0 generates metadata automatically, based on the latest document in each collection.
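To preview the structure that metadata resolution will be based on, you can inspect the newest document of a collection yourself. The following is a minimal Java driver sketch, not connector internals; the database and collection names are hypothetical, and it assumes that ObjectId-based _id values reflect insertion order:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.model.Sorts;
import org.bson.Document;

public class LatestDocumentPreview {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            // Fetch the most recently inserted document of the collection
            Document latest = client.getDatabase("mydb")
                    .getCollection("customers")
                    .find()
                    .sort(Sorts.descending("_id"))
                    .limit(1)
                    .first();

            System.out.println(latest == null ? "collection is empty" : latest.toJson());
        }
    }
}
```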
Metadata in MongoDB v6.0
| Operation Name | Input Metadata | Output Metadata |
|---|---|---|
| Insert Document | Document | Resolved dynamically based on the selected value of the collection parameter. The connector reads the latest document of the given collection and uses the document’s structure as input/output metadata. |
| Insert Documents | Document | Resolved dynamically based on the selected value of the collection parameter. The connector reads the latest document of the given collection and uses the document’s structure as input/output metadata. |
| Update Documents | Not applicable | JSON that contains the following structure: |
| Remove Documents | Query: JSON Object | Not applicable |
| Count Documents | Query: JSON Object | Not applicable |
| Find Documents | Query: JSON Object | Resolved dynamically based on the selected value of the collection parameter. The connector uses the structure of the last document in the given collection. |
| Create File | Not applicable | A JSON object with the following attributes: |
| Find Files | Not applicable | List of JSON objects with the following attributes: |
Removed Operations
The following operations were removed from the MongoDB connector:
| Removed operation (MongoDB 5.x) | Description | Replacement (MongoDB 6.0.0) | Description |
|---|---|---|---|
| Incremental dump | Executes an incremental dump of the database | Dump | Use the Dump operation. |
| Restore | Takes the output from the dump and restores it | Restore from file or Restore from directory | Restore from file or Restore from directory take the output from the dump file or directory and restore it. |
| Update documents by function | Updates documents using a Mongo function | Not applicable | Not applicable |
| Update documents by functions | Updates documents using one or more Mongo functions | Not applicable | Not applicable |
| Find one and update document | Finds and updates the first document that matches a given query | Not applicable | This operation’s functionality can be reproduced by combining other operations. For example, invoke the Find documents operation with a limit of 1 document, and then invoke the Update documents operation using the ID of the document returned previously (see the sketch after this table). |
| Save document | Inserts or updates a document based on its object ID | Not applicable | Not applicable |
| Find one document | Finds the first document that matches a given query | Find documents | Set Limit to 1 to return one document. |
| Execute generic command | Executes a generic command on the database | Execute command | Executes a command on the database |
| List files | Lists all files that match the given query and sorts them by filename | Not applicable | Not applicable |
| Find one file | Returns the first file that matches the given query | Find files | Specify a query that matches the specific file. |
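For reference, the following minimal Java driver sketch shows the two-step pattern that replaces Find one and update document (find with a limit of 1, then update by _id). In a Mule application you would chain the connector’s Find documents and Update documents operations instead; the collection and field names here are hypothetical:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;
import org.bson.Document;

public class FindOneAndUpdateEquivalent {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> orders =
                    client.getDatabase("mydb").getCollection("orders");

            // Step 1: find the first document that matches the query (limit of 1)
            Document match = orders.find(Filters.eq("status", "PENDING"))
                    .limit(1)
                    .first();

            // Step 2: update that document by its _id
            if (match != null) {
                orders.updateOne(
                        Filters.eq("_id", match.getObjectId("_id")),
                        Updates.set("status", "PROCESSING"));
            }
        }
    }
}
```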
Removed Parameters
The Num To Skip field in the Find Documents operation is removed from MongoDB Connector 6.x. The skip()
method in MongoDB, which is used internally by the Num To Skip field, can lead to performance issues, especially as the offset increases. This is because the server scans through the results from the beginning to the specified offset before returning any documents, which adds unnecessary load and slows down the operation.
To optimize pagination and improve performance when migrating to MongoDB Connector 6.x, the following approaches are recommended:
- Avoid using skip() for large offsets. Instead of using the Num To Skip field, it is more efficient to refine query conditions to exclude unwanted documents directly.
- Use range queries. Use existing indexes to perform range queries. For example, if your documents have an indexed timestamp or unique ID field, you can use that field to filter results directly and avoid scanning through unwanted documents:

  { "timestamp": { "$gt": ISODate("2024-01-01T00:00:00Z") } }

- Apply limit() for pagination. Use the limit() method in combination with range queries to control the number of documents returned, effectively implementing pagination without the performance issues associated with skip(). For example, in the MongoDB shell:

  db.collection.find({ "timestamp": { "$gt": ISODate("2024-01-01T00:00:00Z") } }).limit(10)

- Combine filters and sorting. Consider ordering documents by an indexed field, which allows for more efficient queries when paginating through results (see the sketch after this list).
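The following minimal Java driver sketch shows range-based (keyset) pagination that combines an indexed range filter, sorting, and a limit instead of skip(). With the connector, the same idea translates to refining the Find documents query and Limit fields; the database, collection, and page size shown are hypothetical:

```java
import com.mongodb.client.FindIterable;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Sorts;
import org.bson.Document;
import org.bson.conversions.Bson;
import org.bson.types.ObjectId;

public class RangePagination {
    // Returns one page of documents with _id greater than the given boundary (null for the first page)
    static FindIterable<Document> nextPage(MongoCollection<Document> events, ObjectId lastSeenId, int pageSize) {
        Bson filter = (lastSeenId == null) ? new Document() : Filters.gt("_id", lastSeenId);
        return events.find(filter)
                .sort(Sorts.ascending("_id"))   // paginate on an indexed field instead of using skip()
                .limit(pageSize);
    }

    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> events = client.getDatabase("mydb").getCollection("events");

            ObjectId lastSeenId = null;
            for (Document doc : nextPage(events, lastSeenId, 10)) {
                lastSeenId = doc.getObjectId("_id");   // remember the boundary for the next page
                System.out.println(doc.toJson());
            }
        }
    }
}
```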
Changes in Input Sources
MongoDB Connector 6.0.0 has one input source, Object Listener, which retrieves all of the created documents that belong to a specific collection.
The Delete Sources and Update Sources input sources were removed.
Metadata in Object Listener
| Input Metadata | Output Metadata |
|---|---|
| Not applicable | The connector resolves output metadata dynamically based on the selected value of the Collection Name parameter. The connector reads the last document of the given collection and uses the document’s structure as output metadata. |
Requirements and Limitations
| Software | Version |
|---|---|
| Mule | 4.1.1 and later |
| MongoDB Java driver | 3.11 and later |
Upgrade Prerequisites
Before you perform the upgrade, you must:
- Create a backup of your files, data, and configuration in case you need to restore to the previous version.
- Install MongoDB Connector v6.0 to replace the operations that were previously included in MongoDB Connector v5.x.
Upgrade Steps
Follow these steps to upgrade from MongoDB Connector v5.3.x to MongoDB Connector v6.0:
1. In Anypoint Studio, create a Mule project.
2. In the Mule Palette view, click Search in Exchange.
3. In the Add Dependencies to Project window, enter MongoDB in the search field.
4. In the Available modules section, select MongoDB Connector and click Add.
5. Click Finish.
6. Verify that the MongoDB Connector dependency version is 6.0.0 in the pom.xml file in the Mule project.
Studio upgrades the connector automatically.
Verify the Upgrade
After you install the latest version of the connector, follow these steps to verify the upgrade:
1. In Studio, verify that there are no errors in the Problems or Console views.
2. Check the project pom.xml file and verify that there are no problems.
3. Test the connection and verify that the operations work.
Troubleshooting
If there are problems with caching the parameters and metadata, try restarting Studio.