Database Examples

Example: Configure a Database Connection

The Database connector can connect to any database for which a JDBC driver is available. To make it easier to connect to the most popular databases, the connector includes a series of different connection types.

Some of the following examples show configurations for common connection types such as MySQL, Oracle, and Microsoft SQL Server. Others illustrate more advanced use cases, such as connecting to a global data source, configuring a JDBC driver, and configuring connection pooling.

Connect to MySQL

<db:config name="dbConfig">
  <db:my-sql-connection
    host="0.0.0.0"
    port="3306"
    user="Max"
    password="POWERS"
    database="someSchema"/>
</db:config>

Connect to Oracle

<db:config name="dbConfig">
  <db:oracle-connection
    host="192.168.99.100"
    port="1521"
    instance="xe"
    user="system"
    password="oracle"/>
</db:config>

Connect to Microsoft SQL Server

<db:config name="dbConfig">
    <db:mssql-connection
        host="0.0.0.0"
        port="1433"
        user="Max"
        password="mypassword"
        databaseName="microsoftDB"/>
</db:config>

Connect to a Generic Database

The generic connection type is useful in these cases:

  • When you want to connect to a database for which MuleSoft does not provide a specific connection type.

  • When you want to connect to one of the supported databases but are using custom options that are not included in the connection types.

A generic connection simply requires:

  • Driver class name

  • Connection URL

The following example shows how to connect to an H2 database:

<db:config name="dbConfig">
    <db:generic-connection driverClassName="org.h2.Driver"
    url="jdbc:h2:file:/tmp/datasenseDB"/>
</db:config>
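
As with any connection type, the matching JDBC driver must be available to the application (see Set the JDBC Driver below). The following is a sketch of the H2 driver dependency for your pom.xml; the version shown is illustrative:

<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.197</version>
</dependency>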

Connect to a Global DataSource

When you set up a generic connection, the connector uses the information you provide to generate a JDBC DataSource. In some cases, you might want to create your own DataSource. To do this, the database configuration supports a data-source-connection element that allows you to reference a DataSource that is defined through the Spring module.

To connect to a global DataSource:

  1. Create a Spring configuration file containing a Bean such as this one:

    <bean id="jdbcDataSource" class="org.enhydra.jdbc.standard.StandardDataSource"
     destroy-method="shutdown">
       <property name="driverName" value="org.apache.derby.jdbc.EmbeddedDriver"/>
       <property name="url" value="jdbc:derby:muleEmbeddedDB;create=true"/>
    </bean>

    The example above creates a custom enhydra DataSource that connects to a Derby database.

  2. Reference your Spring configuration file using the Spring module, and use the DataSource in a connection:

    <spring:config name="datasource" files="datasource-config.xml" />
    
    <db:config name="dbConfig">
       <db:data-source-connection dataSourceRef="jdbcDataSource" />
    </db:config>

Set the JDBC Driver

Now that you know how to configure the connections, you can supply the JDBC driver. The example below shows how to supply a driver for the MySQL database.

Note that the steps are the same for any database; the only difference is the specific driver dependency that you need.

  1. Add the driver as a dependency in your project’s pom.xml file:

    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.6</version>
    </dependency>
  2. Through your pom.xml, make Mule Runtime expose the driver dependency to the Database connector, for example:

    <build>
        <plugins>
            <!-- Only used to declare the shared libraries-->
            <plugin>
                <groupId>org.mule.tools.maven</groupId>
                <artifactId>mule-maven-plugin</artifactId>
                <version>1.0.0</version>
                <configuration>
                    <sharedLibraries>
                        <sharedLibrary>
                            <groupId>mysql</groupId>
                            <artifactId>mysql-connector-java</artifactId>
                        </sharedLibrary>
                    </sharedLibraries>
                </configuration>
            </plugin>
        </plugins>
    </build>

Configure Connection Pools

Establishing a database connection is costly in terms of network and processing overhead, both for the connector and for the RDBMS. The solution is to maintain a connection pool that provides a number of ready-to-use connections. For this purpose, you use the <db:pooling-profile> element rather than the standard Mule <pooling-profile> element.

This example shows how to use the db:pooling-profile element:

<db:config name="dbConfig">
    <db:my-sql-connection database="mozart_test" host="${host}" password="${password}" port="${port}" user="${user}">
        <db:pooling-profile acquireIncrement="1" maxPoolSize="5" maxWait="0" maxWaitUnit="SECONDS" minPoolSize="0" preparedStatementCacheSize="5"/>
    </db:my-sql-connection>
</db:config>

All connection configuration elements accept a pooling profile except the global data source reference. In JDBC, pooling occurs at the data source level, so when you reference a global DataSource you must configure pooling in the DataSource itself; the connector cannot add it on the fly.

Notice that unlike other connectors such as FTP or SFTP, the <db:pooling-profile> element is used instead of the standard <pooling-profile>. This is because databases have special pooling attributes, such as preparedStatementCacheSize, which do not make sense on the generic element.

For more details on parameters and capabilities of the connection types, see the Database Connector Technical Reference.

Create an Oracle Database Connection with TNS Names

To create an Oracle connection that relies on the tnsnames.ora file, you can use the Database connector and provide configuration details in the connector configuration. For this to work, you must add ojdbc6.jar as a dependency in your project build path.

If a firewall is between your Mule instance and the database itself, use the ENABLE=BROKEN parameter to enable a keep-alive at the TNS connection. If not set, the connection may drop traffic due to expiring inactivity timers.

For example, you can configure a Database URL in the Database connector’s General tab in Anypoint Studio, and supply this URL:

jdbc:oracle:thin:${oracle.user}/${oracle.password}@(DESCRIPTION=(ENABLE=BROKEN)(FAILOVER=ON)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=${oracle.host1})(PORT=${oracle.port}))(ADDRESS=(PROTOCOL=TCP)(HOST=${oracle.host2})(PORT=${oracle.port})))(CONNECT_DATA=(SERVICE_NAME=${oracle.service})))
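
If you configure the connection in XML instead, one way to supply this URL is through a generic connection. This is a sketch that assumes the Oracle thin driver class provided by ojdbc6.jar:

<db:config name="dbConfig">
    <db:generic-connection
        driverClassName="oracle.jdbc.driver.OracleDriver"
        url="jdbc:oracle:thin:${oracle.user}/${oracle.password}@(DESCRIPTION=(ENABLE=BROKEN)(FAILOVER=ON)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=${oracle.host1})(PORT=${oracle.port}))(ADDRESS=(PROTOCOL=TCP)(HOST=${oracle.host2})(PORT=${oracle.port})))(CONNECT_DATA=(SERVICE_NAME=${oracle.service})))"/>
</db:config>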

Example: Query a Database

When you configure a Database operation using the Database connector, there are several ways to add variable values to the SQL statement you execute in the database.

Use Input Parameters to Protect Database Queries

The select operation is used to retrieve information from the RDBMS. The primary concept of this operation is to supply a SQL query and use DataWeave to supply the parameters:

<flow name="selectParameterizedQuery">
  <db:select config-ref="dbConfig">
    <db:sql>SELECT * FROM PLANET WHERE name = :name</db:sql>
    <db:input-parameters>
      #[{'name' : payload}]
    </db:input-parameters>
  </db:select>
</flow>

As you can see in the above example, input parameters are supplied as key-value pairs, which you can create by embedding a DataWeave script. Those keys are used in conjunction with the colon character (:) to reference a parameter value by name. This is the recommended approach for using parameters in your query.

Since version 1.4.0 (escaping colons): If you need to use the colon character (:) literally in your SQL query, you can escape it by putting a backslash before it. This is useful, for example, when using PostgreSQL type casting, which requires two colons (::) before the type you are casting to, for example: <db:sql>SELECT price\:\:float8 FROM PRODUCT</db:sql>

The alternative is to directly write <db:sql>SELECT * FROM PLANET WHERE name = #[payload] </db:sql>, but this is a very dangerous practice that is not recommended.

Using input parameters to configure the WHERE clause in a SELECT statement this way has the following advantages:

  • The query becomes immune to SQL injection attacks.

  • The connector can perform optimizations that are not possible otherwise, which improves the app’s overall performance.

DataSense is available for the operation’s input and output. By analyzing the query, the connector automatically calculates the structure of the query’s output by analyzing the projection section of the SQL statement. At the same time, by comparing the conditions in the WHERE clause to the table structure, it also generates DataSense input to help you build the DataWeave script that generates the input parameters.

Dynamic Queries

Sometimes you need to parameterize not only the WHERE clause but also parts of the query itself. Examples of use cases for this are queries that need to hit online versus historic tables depending on a condition, or complex queries where the projected columns need to vary.

In this example, you can see how a full expression is used to produce the query by building a string in which the table depends on a variable, $(vars.table). An important thing to notice is that although some of the query text is dynamic ("SELECT * FROM $(vars.table)"), the WHERE clause still follows the best practice of defining the condition using input parameters, in this case WHERE name = :name:

<set-variable variableName="table" value="PLANET"/>
<db:select config-ref="dbConfig">
    <db:sql>#["SELECT * FROM $(vars.table) WHERE name = :name"]</db:sql>
    <db:input-parameters>
        #[{'name' : payload}]
    </db:input-parameters>
</db:select>

In Mule 3, the concept of select was split into parameterized and dynamic queries, and you couldn’t use both at the same time. You had to choose between having a dynamic query and having the advantages of using parameters (SQL injection protection, PreparedStatement optimization, and so on). Furthermore, the syntax for each was different, so you had to learn two different ways of doing the same thing. With the Database connector in Mule 4, you can use both methods at the same time by using expressions in the query.

Dynamic queries are necessary in the above example because input parameters can be applied only to parameters in a WHERE clause. To modify any other part of the query, use DataWeave’s string interpolation, as shown above.

Stream Large Results

Database tables tend to be big. A single query might return tens of thousands of records, especially when dealing with integration use cases. Streaming is a great solution for this. For example, suppose you have a query that returns 10,000 rows; attempting to fetch all of those rows at once results in the following:

  • Performance degradation, since that’s a big pull from the network

  • A risk of running out of memory, since all that information needs to be loaded into RAM.

Streaming means that the connector does not fetch the 10,000 rows at once; instead, it fetches a smaller chunk, and once that chunk has been consumed, it fetches the next one. That way, you reduce pressure on both the network and memory.

Stream in Mule 4 versus Stream in Mule 3

In Mule 3.x, streaming was something you had to specifically enable because it was disabled by default. In Mule 4, streaming is transparent and always enabled. You don’t have to worry about it anymore. You can simply trust that the feature is there.

Another improvement from Mule 3 is that you can now use the new repeatable streams mechanism from Mule 4. This means that streams are now repeatable, and you can have DataWeave and other components process the same stream many times, even in parallel. For more information on repeatable streams, see Streaming in Mule 4.0.

Limit Results

Mule runtime allows the connector to handle streaming gracefully. However, that does not mean that it’s a good idea to move large chunks of data from the database to Mule. Even with streaming, a typical SQL query can return many rows, each one containing a lot of information. This generates:

  • Large network overhead

  • A large resource footprint on the database server

  • Large memory requirements in the Mule runtime

The Select operation provides parameters (fetchSize and maxRows) to help with this:

<db:select fetchSize="200" maxRows="1000" config-ref="dbConfig">
  <db:sql>select * from some_table</db:sql>
</db:select>

This example tells the connector not to fetch more than 1000 total rows as the result of the SQL SELECT query, and to try to stream results from the database to the Database connector only 200 rows at a time. So in this example, retrieving the 1000 matching rows requires 5 separate network round trips (1000 / 200 = 5).

This combination limits the total amount of information that is retrieved (the maxRows value) and requests that the data be returned from the database over the network in smaller chunks (the fetchSize value).

The fetchSize is a hint to the JDBC driver and need not always be enforced. The behavior varies between JDBC driver providers. Often, the default fetchSize for a JDBC driver is 10.

Query Timeout Reference

Sometimes database queries take a long time to execute. The following factors often cause delays in query execution:

  • An inefficient query, such as one having improper indexing that iterates over many rows

  • A busy RDBMS or network

  • Lock contention

Generally, it’s recommended to set a timeout on the query. To manage timeouts, configure queryTimeout and queryTimeoutUnit. The following example shows how to set a timeout for the Select operation, but all operations support setting a timeout:

<db:select queryTimeout="0" queryTimeoutUnit="SECONDS" config-ref="dbConfig">
   <db:sql>select * from some_table</db:sql>
</db:select>
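
Note that at the JDBC level a queryTimeout of 0 typically means no timeout at all, so to actually enforce a limit, set a positive value, for example queryTimeout="10".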

Examples: Insert, Update, and Delete Data with the Database Connector

The Database connector supports insert, update, and delete operations. Similar to the select operation, these operations support dynamic queries and parameterization by using embedded DataWeave transformations while also supporting fetchSize, maxRows, and timeout parameters. Although examples of input parameterization, dynamic queries, and parameter usage are provided, you can consult the SELECT operation documentation for more information.

Insert

Suppose there is a database schema named Products that has a table named electronic. electronic contains columns named id, name, description, price, and discount. The following SQL statement creates the table:

CREATE TABLE electronic(
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100),
    description VARCHAR(255),
    price SMALLINT,
    discount TINYINT
)

For an example of creating a table using the Database connector, check how to Execute DDL Statements.

Given that the table is already created and that there is a connection configuration named dbConfig, the following operation inserts a new record in the electronic table:

<db:insert config-ref="dbConfig">
  <db:sql>
    INSERT INTO electronic(name, description, price, discount)
    VALUES ('Coffee Machine', 'Model: XYZ99. Uses small size capsules.', 120, 5)
  </db:sql>
</db:insert>

Note that the id parameter is not passed in the query because it is an autoincrement key that is generated automatically by the database.

To benefit from the advantages of input parameters, structure the equivalent query as follows:

<db:insert config-ref="dbConfig">
    <db:sql>
        INSERT INTO electronic(name, description, price, discount)
        VALUES (:name, :description, :price, :discount)
    </db:sql>
    <db:input-parameters>#[{
        name: 'Coffee Machine',
        description: 'Model:XYZ99. Uses small size capsules.',
        price: 120,
        discount: 5}]
    </db:input-parameters>
</db:insert>

The insert operation outputs a Statement Result object that contains two fields: an affectedRows integer that indicates how many rows were affected by the query, and a generatedKeys map that contains the autogenerated keys. Given this example, the output is as follows:

{
  "affectedRows": 1,
  "generatedKeys": {

  }
}

The generatedKeys map is empty even though a row is successfully added to the table. The map is empty because returning autogenerated keys is disabled by default to avoid overhead. To enable it, set the autoGenerateKeys parameter to true:

<db:insert config-ref="dbConfig" autoGenerateKeys="true">
   <db:sql>
       INSERT INTO electronic(name, description, price, discount)
       VALUES (:name, :description, :price, :discount)
   </db:sql>
   <db:input-parameters>#[{
       name: 'Coffee Machine',
       description: 'Model:XYZ99. Uses small size capsules.',
       price: 120,
       discount: 5}]
   </db:input-parameters>
</db:insert>
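
With autoGenerateKeys set to true, the generatedKeys map is populated. The exact key name depends on the JDBC driver (MySQL, for example, reports it as GENERATED_KEY); the following output is illustrative:

{
  "affectedRows": 1,
  "generatedKeys": {
    "GENERATED_KEY": 15
  }
}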

If the statement generates multiple keys per affected row, you can specify which columns should be returned by setting the parameter autoGeneratedKeysColumnNames to a list that contains the desired column names. For example, to ensure that only the id column is returned, set the parameter to a DataWeave list with a single string element:

<db:insert config-ref="dbConfig" autoGenerateKeys="true" autoGeneratedKeysColumnNames="#[['id']]">
    ...
</db:insert>

Update

Given the table that is defined in the insert example, the operation to update the electronic table to set the discount to 10 percent for all items that have a price value above 100 is as follows:

<db:update config-ref="dbConfig">
    <db:sql><![CDATA[#["UPDATE electronic SET discount = :discount WHERE price > :price"]]]></db:sql>
    <db:input-parameters>#[{
        discount: 10,
        price: 100
    }]</db:input-parameters>
</db:update>

The <![CDATA[…​]]> wrapper allows you to use special characters, such as > or ", in the query. Otherwise, you must use XML-escaped versions of those characters, such as &gt; and &quot;.

When you use the Anypoint Studio visual user interface for connectors instead of the XML code view, you can type special characters directly into the SQL Query Text box, and Anypoint Studio automatically changes the character to its escaped version in the XML view.

Delete

Given the table defined in the example for Insert, the following operation deletes the record with id 1 from the electronic table:

<db:delete config-ref="dbConfig">
    <db:sql>DELETE FROM electronic WHERE id = :id</db:sql>
    <db:input-parameters>#[{
        id: 1
    }]</db:input-parameters>
</db:delete>
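
Like insert, the update and delete operations report the number of affected rows in their result; for the delete above, affectedRows would be 1 if a record with id 1 existed.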

Example: Execute Stored Procedures

Invoke stored procedures that combine input, output, and input-output parameters.

For example:

<!-- Invoke a procedure with input parameters -->
<db:stored-procedure config-ref="dbConfig">
    <db:sql>{ call updatePlanetDescription('Venus', :description) }</db:sql>
    <db:input-parameters>
        #[{'description' : payload}]
    </db:input-parameters>
</db:stored-procedure>

<!-- Invoke a procedure with input-output parameters -->
<db:stored-procedure config-ref="dbConfig">
  <db:sql>{ call doubleMyInt(:myInt) }</db:sql>
  <db:in-out-parameters>
      <db:in-out-parameter key="myInt" value="3"/>
  </db:in-out-parameters>
</db:stored-procedure>

<!-- Invoke a procedure with both input AND output parameters -->
<db:stored-procedure config-ref="dbConfig">
    <db:sql>{ call multiplyInts(:int1, :int2, :result1, :int3, :result2) }</db:sql>
    <db:input-parameters>
        #[{
            'int1' : 3,
            'int2' : 4,
            'int3' : 5
        }]
    </db:input-parameters>
    <db:output-parameters>
        <db:output-parameter key="result1" type="INTEGER"/>
        <db:output-parameter key="result2" type="INTEGER"/>
    </db:output-parameters>
</db:stored-procedure>

Many combinations are possible.

After execution, the resulting values of the output and input-output parameters are available in the operation’s result. Leverage DataSense to help you get them.
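
For example, after the multiplyInts call above, you can read the output parameter values from the operation’s result with a DataWeave expression. This is a sketch that assumes the output parameters are exposed as keys of the result map:

<logger level="INFO" message="#['result1 = ' ++ (payload.result1 as String)]"/>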

For connector version 1.4.0 or later, make sure that there is no whitespace between the name of the stored procedure and the opening parenthesis.
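
For example, write { call doubleMyInt(:myInt) }, not { call doubleMyInt (:myInt) }.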

Example: Execute DDL Statements

DDL statements, or Data Definition Language statements, are special types of SQL statements that do not directly access or modify data. Instead, they create, modify, or destroy the structures that hold data, such as tables, as well as other database objects such as stored procedures.

Create Table Example

Suppose you have a database schema named Products. You also have a connection named dbConfig that is configured to access Products. The following operation, named Execute DDL, creates a new table named electronic that has columns named id, name, description, price, and discount:

<db:execute-ddl config-ref="dbConfig">
    <db:sql>
        CREATE TABLE electronic(
            id INT AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(100),
            description VARCHAR(255),
            price SMALLINT,
            discount TINYINT
        )
    </db:sql>
</db:execute-ddl>

The Execute DDL operation creates a table and returns 0 unless there is an error.

Example: Execute Bulk Operations

The insert, update, and delete operations can be used for cases in which each input parameter takes only one value. Alternatively, bulk operations allow you to run a single query with multiple sets of parameter values.

Without bulk operations, when deleting, many rows can match the criterion and get deleted, but only one criterion value (POSITION = X) can be provided. The same concept applies for update: if you use UPDATE PRODUCTS set PRICE = PRICE * 0.9 where PRICE > :price, you may apply a 10% discount to many products, but the price input parameter accepts only one value.

If you want to apply different discount rates to products that have different prices, you can do so by executing many operations. For example, assume the payload is a list of objects of the following structure: { price : number, discountRate: number }. From that point, you can do the following:

<foreach>
  <db:update config-ref="dbConfig">
    <db:input-parameters>
     #[
      {
        'discountRate' : payload.discountRate,
        'price' : payload.price
      }
    ]
    </db:input-parameters>
    <db:sql>
      UPDATE PRODUCTS set PRICE = PRICE * :discountRate where PRICE > :price
    </db:sql>
  </db:update>
</foreach>

The previous operation accomplishes the task at hand but is inefficient: one query must be executed for each element in the list. So, for each element, the operation must:

  • Parse the query

  • Resolve parameters

  • Acquire a connection to the database (either by getting one from the pool or by establishing a new one)

  • Pay all the network overhead

  • Wait for the RDBMS to process the query and apply the changes

  • Release the connection

You can avoid all of the above steps by using a bulk operation. Notice that the example involves only one query: the update statement is constant, not dynamic. The only thing that changes between iterations is the set of parameter values.

Bulk operations are not just a shortcut for the <foreach> example above; they use features of the JDBC API so that:

  • The query is parsed only once

  • Only one database connection is required since a single statement is executed

  • Network overhead is minimized

  • The RDBMS can execute the bulk operation atomically

For these use cases, the connector offers three operations: <db:bulk-insert>, <db:bulk-update> and <db:bulk-delete>.

These operations are similar to their single counterparts, except that instead of receiving input parameters as a single set of key-value pairs, the operations expect a list of such sets.

For example:

<db:bulk-insert config-ref="dbConfig" >
  <db:bulk-input-parameters>
    #[[{'id': 2, 'name': 'George', 'lastName': 'Costanza'}, {'id': 3, 'name': 'Cosmo', 'lastName': 'Kramer'}]]
  </db:bulk-input-parameters>
  <db:sql>
    insert into customers (id, name, lastName) values (:id, :name, :lastName)
  </db:sql>
</db:bulk-insert>
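
For example, the foreach discount update shown earlier can be rewritten as a single bulk operation. This is a sketch that assumes the payload is still a list of { price, discountRate } objects:

<db:bulk-update config-ref="dbConfig">
  <db:bulk-input-parameters>
    #[payload map {'discountRate': $.discountRate, 'price': $.price}]
  </db:bulk-input-parameters>
  <db:sql>
    UPDATE PRODUCTS set PRICE = PRICE * :discountRate where PRICE > :price
  </db:sql>
</db:bulk-update>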

Example: Database Connector Data Types Reference

The following examples include information about database input parameters and defining custom data types you can use when connected to a particular connection provider.

Force Parameter Types

Although uncommon, at some point you might need to force the underlying JDBC driver to coerce an input parameter to a specific type. For example, if your JDBC driver cannot determine the type of an input, or a parameter is of a custom type, you can force the type by specifying it for each input parameter:

<db:bulk-insert config-ref="dbConfig" queryTimeout="0" queryTimeoutUnit="SECONDS">
    <db:sql>INSERT INTO PLANET(POSITION, NAME) VALUES (:position, :name)</db:sql>
    <db:parameter-types>
        <db:parameter-type key="name" type="VARCHAR" />
        <db:parameter-type key="position" type="INTEGER" />
    </db:parameter-types>
</db:bulk-insert>

Define Custom Data Types

The Database Connector Reference describes exclusive and common parameters for each connection provider, such as Derby and Oracle. A child element of the connection provider element defines custom data types you can use when connected to a particular provider. For example:

<db:config name="dbConfig">
   <db:derby-connection url="jdbc:derby:muleEmbeddedDB;create=true">
       <db:column-types>
           <!-- Derby uses JAVA_OBJECT for UDT-->
           <db:column-type typeName="CONTACT_DETAILS" id="2000"/>
       </db:column-types>
   </db:derby-connection>
</db:config>

Example: Database Transactions Reference

You can execute database operations in the context of a transaction. Each operation has a transactionalAction parameter that specifies how the operation joins the active transaction, if one exists. For example, the Select operation has the following possible actions (an example follows this list):

  • ALWAYS_JOIN: Expects a transaction to be in progress when a message is received. If there is no transaction, an error is raised.

  • JOIN_IF_POSSIBLE: Joins the current transaction if one is available. Otherwise, no transaction occurs.

  • NOT_SUPPORTED: Executes outside any existent transaction.
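
For example, the following select only executes if there is already an active transaction (illustrative):

<db:select config-ref="dbConfig" transactionalAction="ALWAYS_JOIN">
    <db:sql>select * from PLANET</db:sql>
</db:select>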

Group Operations

Sometimes you need to execute several queries atomically in the context of the same transaction. For example, during a bank account transfer, you need to subtract money from one account and add it to another; if either operation fails, both must be rolled back:

<db:update config-ref="db">
 <db:sql>UPDATE ACCOUNT set BALANCE = BALANCE - :money where ID = :source</db:sql>
 <db:input-parameters>#[{'money' : payload.money, 'source': payload.source}]</db:input-parameters>
</db:update>

<db:update config-ref="db">
 <db:sql>UPDATE ACCOUNT set BALANCE = BALANCE + :money where ID = :target</db:sql>
 <db:input-parameters>#[{'money' : payload.money, 'target': payload.target}]</db:input-parameters>
</db:update>

If these queries are executed in the context of an already existing transaction, the queries belong to the same transaction. If there’s no active transaction, you can start one by using the <try> scope:

<try transactionalAction="ALWAYS_BEGIN">
 <db:update config-ref="db">
   <db:sql>UPDATE ACCOUNT set BALANCE = BALANCE - :money where ID = :source</db:sql>
   <db:input-parameters>#[{'money' : payload.money, 'source': payload.source}]</db:input-parameters>
 </db:update>

 <db:update config-ref="db">
   <db:sql>UPDATE ACCOUNT set BALANCE = BALANCE + :money where ID = :target</db:sql>
   <db:input-parameters>#[{'money' : payload.money, 'target': payload.target}]</db:input-parameters>
 </db:update>
</try>

Example: Execute Script Reference

This operation executes a script of arbitrary length as a single statement. Execute script differs from other operations in the following ways:

  • The script can contain multiple statements.

  • Statements can be of different types.

  • No input/output parameters are accepted.

The execute-script operation runs any script that does not involve a SQL projection. You can use <db:execute-script> in the following ways:

  • Embed execute-script in an operation.

  • Reference execute-script from a file.

You cannot use both ways of executing a script at the same time.

Executing a script returns an array of integers, one element for each executed statement. Each number represents the number of objects affected by the corresponding statement.

Embed in an Operation

<db:execute-script config-ref="dbConfig">
   <db:sql>
       update PLANET set NAME='Mercury' where POSITION=0;
       update PLANET set NAME='Mercury' where POSITION=4
   </db:sql>
</db:execute-script>
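
For the script above, the result is an array with two elements, one per update statement; for example, [1, 1] if each update affected exactly one row (illustrative values).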

Reference From a File

<flow name="executeScriptFromFile">
   <db:execute-script config-ref="dbConfig" file="integration/executescript/bulk-script.sql" />
</flow>

The execute script operation is frequently used to create schemas and tables, insert data, and perform data rotation. Data rotation is typically performed by nightly jobs that move and archive data into historic tables and purge the online tables.

Execute script and bulk operations are intended for different uses. For example, although you can build a script that inserts many rows into the database, you cannot conveniently provide dynamic parameters to the script, and you must also ensure that the script itself is protected against SQL injection attacks.

Also, executing a select statement in a script returns no data, so scripts are not suitable for retrieving information.
