
DataWeave Output Formats and Writer Properties

DataWeave 2.2 is compatible and bundled with Mule 4.2. This version of Mule reached its End of Life on May 2, 2023, when Extended Support ended.

Deployments of new applications to CloudHub that use this version of Mule are no longer allowed. Only in-place updates to applications are permitted.

MuleSoft recommends that you upgrade to the latest version of Mule 4 that is in Standard Support so that your applications run with the latest fixes and security enhancements.

DataWeave can read and write many types of data formats, such as JSON, XML, and many others. Before you begin, note that DataWeave version 2 is for Mule 4 apps. For a Mule 3 app, refer to the DataWeave 1.0 documentation set in the Mule 3.9 documentation. For other Mule versions, you can use the version selector for the Mule Runtime table of contents.

DataWeave supports these formats (or MIME types) as input and output:

MIME Type                          Supported Formats

application/avro                   Avro
application/csv                    CSV
application/dw                     DataWeave (weave) (for testing a DataWeave expression)
application/flatfile               Flat File, Cobol Copybook, Fixed Width
application/java                   Java, Enum Custom Type (for Java)
application/json                   JSON
application/octet-stream           Octet Stream (for binaries)
application/yaml                   YAML
application/xml                    XML, CData Custom Type (for XML)
application/x-ndjson               Newline Delimited JSON (ndjson)
application/xlsx                   Excel
application/x-www-form-urlencoded  URL Encoding
multipart/*                        Multipart (Form-Data)
text/plain                         Text Plain (for plain text)
text/x-java-properties             Text Java Properties (Properties)

DataWeave Readers and Writers

DataWeave can read input data as a whole in-memory, in indexed fashion, and, for some data formats, part-by-part by streaming the input. When attempting to read a large file, it is possible to run out of memory or to impact performance negatively. Streaming can improve performance, but it limits access to the file to sequential reads.

  • Indexed and In-Memory: Allow for random access to data because both strategies parse the entire document. For these strategies, your DataWeave script can access any part of the resulting value at any time.

    • Indexed: Uses indexes over the disk.

    • In-Memory: Parses the entire document in memory.

  • Streaming: Allows for sequential access to the file. This strategy partitions the input document into smaller items and accesses its data sequentially, storing the current item in memory. A DataWeave selector can access the portion of the file that is getting read. DataWeave supports streaming for a few formats, such as CSV, JSON, and Excel.

Using Reader and Writer Properties

In some cases, it is necessary to modify or specify aspects of the format through format-specific properties. For example, you can specify CSV input and output properties, such as the separator (or delimiter) to use in the CSV file. For Cobol copybook, you need to specify the path to a schema file using the schemaPath property.

You can append reader properties to the MIME type (outputMimeType) attribute for certain components in your Mule app. Listeners and Read operations accept these settings. For example, this On New File listener example specifies the comma (,) as the separator for a CSV input file:

Example: Properties for the CSV Reader
<file:listener doc:name="On New File" config-ref="File_Config" outputMimeType='application/csv; separator=","'>
  <scheduling-strategy >
    <fixed-frequency frequency="45" timeUnit="SECONDS"/>
  </scheduling-strategy>
  <file:matcher filenamePattern="comma_separated.csv" />
</file:listener>

Note that the outputMimeType setting above helps the CSV reader interpret the format and delimiter of the input comma_separated.csv file, not the writer.

To specify the output format, you can provide the MIME type and any writer properties for the writer, such as the CSV or JSON writer used by a File Write operation. For example, you might need to write a pipe (|) delimiter in your CSV output payload, instead of some other delimiter used in the input. To do this, you append the property and its value to the output directive of a DataWeave expression. For example, this Write operation specifies the pipe as a separator:

Example: output Directive for the CSV Writer
<file:write doc:name="Write" config-ref="File_Config" path="my_transform">
  <file:content ><![CDATA[#[output application/csv separator="|" --- payload]]]></file:content>
</file:write>

The sections below list the format-specific reader and writer properties available for each supported format.

Setting MIME Types

You can specify the MIME type for the input and output data that flows through a Mule app.

For DataWeave transformations, you can specify the MIME type for the output data. For example, you might set the output header directive of an expression in the Transform Message component or a Write operation to output application/json or output application/csv.

This example sets the MIME type through a File Write operation to ensure that a format-specific writer, the CSV writer, outputs the payload in CSV format:

Example: MIME Type for the CSV Writer
<file:write doc:name="Write" config-ref="File_Config" path="my_transform">
  <file:content ><![CDATA[#[output application/csv --- payload]]]></file:content>
</file:write>

For input data, format-specific readers for Mule sources (such as the On New File listener), Mule operations (such as Read and HTTP Request operations), and DataWeave expressions attempt to infer the MIME type from metadata that is associated with input payloads, attributes, and variables in the Mule event. When the MIME type cannot be inferred from the metadata (and when that metadata is not static), Mule sources and operations allow you to specify the MIME type for the reader. For example, you might set the MIME type for the On New File listener to outputMimeType='application/csv' for CSV file input. This setting provides information about the file format to the CSV reader.

Example: MIME Type for the CSV Reader
<file:listener doc:name="On New File"
  config-ref="File_Config"
  outputMimeType='application/csv'>
</file:listener>

Note that reader settings are not used to perform a transformation from one format to another. They simply help the reader interpret the format of the input.

You can also set special reader and writer properties for use by the format-specific reader or writer of a source, operation, or component. See Using Reader and Writer Properties.
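You can also declare the reader format and its properties inside a DataWeave script by adding an input directive to the script header. The following sketch assumes a pipe-delimited CSV payload:

Example: input Directive for the CSV Reader
%dw 2.0
input payload application/csv separator="|"
output application/json
---
payload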

Avro

MIME type: application/avro

Avro is a data serialization system.

Writer Properties (for application/avro)

When specifying application/avro as the output format in a DataWeave script, you can add the following properties to change the way the DataWeave parser processes data.

  • schemaUrl (String): URL of the Avro schema that defines the structure of the data.

  • bufferSize (Number; default: 8192): Size of the writer buffer.

  • deferred (Boolean; default: false): When set to true, DataWeave generates the output as a data stream, and the script’s execution is deferred until it is consumed. Valid options: true or false.

Reader Properties (for application/avro)

When defining application/avro input for the DataWeave reader, you can set the following property.

  • schemaUrl (String): URL of the Avro schema that defines the structure of the input data.

Cobol Copybook

MIME Type: application/flatfile

A Cobol copybook is a type of flat file that describes the layout of records and fields in a Cobol data file.

The Transform Message component provides settings for handling the Cobol copybook format. For example, you can import a Cobol definition into the Transform Message component and use it for your Copybook transformations.

Cobol copybook in DataWeave supports files of up to 15 MB, and the memory requirement is roughly 40 to 1. For example, a 1-MB file requires up to 40 MB of memory to process, so it’s important to consider this memory requirement in conjunction with your TPS needs for large copybook files. This is not an exact figure; the value might vary according to the complexity of the mapping instructions.

Importing a Copybook Definition

When you import a Copybook definition, the Transform Message component converts the definition to a flat file schema that you can reference with the schemaPath property.

To import a copybook definition:

  1. Right-click the input payload in the Transform component in Studio, and select Set Metadata to open the Set Metadata Type dialog.

    Note that you need to create a metadata type before you can import a copybook definition.

  2. Provide a name for your copybook metadata, such as copybook.

  3. Select the Copybook type from the Type drop-down menu.

  4. Import your copybook definition file.

  5. Click Select.

    Figure 1. Importing a Copybook Definition File

For example, assume that you have a copybook definition file (mailing-record.cpy) that looks like this:

       01  MAILING-RECORD.
           05  COMPANY-NAME            PIC X(30).
           05  CONTACTS.
               10  PRESIDENT.
                   15  LAST-NAME       PIC X(15).
                   15  FIRST-NAME      PIC X(8).
               10  VP-MARKETING.
                   15  LAST-NAME       PIC X(15).
                   15  FIRST-NAME      PIC X(8).
               10  ALTERNATE-CONTACT.
                   15  TITLE           PIC X(10).
                   15  LAST-NAME       PIC X(15).
                   15  FIRST-NAME      PIC X(8).
           05  ADDRESS                 PIC X(15).
           05  CITY                    PIC X(15).
           05  STATE                   PIC XX.
           05  ZIP                     PIC 9(5).
  • Copybook definitions must always begin with a 01 entry. A separate record type is generated for each 01 definition in your copybook (there must be at least one 01 definition for the copybook to be usable, so add one using an arbitrary name at the start of the copybook if none is present). If there are multiple 01 definitions in the copybook file, you can select which definition to use in the transform from the dropdown list.

  • COBOL format requires definitions to only use columns 7-72 of each line. Data in columns 1-5 and past column 72 is ignored by the import process. Column 6 is a line continuation marker.

When you import the schema, the Transform component converts the copybook file to a flat file schema that it stores in the src/main/resources/schema folder of your Mule project. In flat file format, the copybook definition above looks like this:

form: COPYBOOK
id: 'MAILING-RECORD'
values:
- { name: 'COMPANY-NAME', type: String, length: 30 }
- name: 'CONTACTS'
  values:
  - name: 'PRESIDENT'
    values:
    - { name: 'LAST-NAME', type: String, length: 15 }
    - { name: 'FIRST-NAME', type: String, length: 8 }
  - name: 'VP-MARKETING'
    values:
    - { name: 'LAST-NAME', type: String, length: 15 }
    - { name: 'FIRST-NAME', type: String, length: 8 }
  - name: 'ALTERNATE-CONTACT'
    values:
    - { name: 'TITLE', type: String, length: 10 }
    - { name: 'LAST-NAME', type: String, length: 15 }
    - { name: 'FIRST-NAME', type: String, length: 8 }
- { name: 'ADDRESS', type: String, length: 15 }
- { name: 'CITY', type: String, length: 15 }
- { name: 'STATE', type: String, length: 2 }
- { name: 'ZIP', type: Integer, length: 5, format: { justify: ZEROES, sign: UNSIGNED } }

After importing the copybook, you can use the schemaPath property to reference the associated flat file through the output directive. For example: output application/flatfile schemaPath="src/main/resources/schemas/mailing-record.ffd"
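A complete script that uses this directive can simply pass the payload through to the flat file writer; the schema file name below is illustrative and must match the file that Studio generated for your project:

Example: Writing Copybook Output with schemaPath
%dw 2.0
output application/flatfile schemaPath="src/main/resources/schemas/mailing-record.ffd"
---
payload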

Supported Copybook Features

Not all copybook features are supported by the Cobol Copybook format in DataWeave. In general, the format supports most common usages and simple patterns, including:

  • USAGE of DISPLAY, BINARY (COMP), COMP-5, and PACKED-DECIMAL (COMP-3). For character encoding restrictions, see Character Encodings.

  • PICTURE clauses for numeric values consisting only of:

    • '9' - One or more numeric character positions

    • 'S' - One optional sign character position, leading or trailing

    • 'V' - One optional decimal point

    • 'P' - One or more decimal scaling positions

  • PICTURE clauses for alphanumeric values consisting only of 'X' character positions

  • Repetition counts for '9', 'P', and 'X' characters in PICTURE clauses (as in 9(5) for a 5-digit numeric value)

  • OCCURS DEPENDING ON with controlVal property in schema. Note that if the control value is nested inside a containing structure, you need to manually modify the generated schema to specify the full path for the value in the form "container.value".

  • REDEFINES clause (used to provide different views of the same portion of record data - see details in section below)

Unsupported features include:

  • Alphanumeric-edited PICTURE clauses

  • Numeric-edited PICTURE clauses, including all forms of insertion, replacement, and zero suppression

  • Special level-numbers:

    • Level 66 - Alternate name for field or group

    • Level 77 - Independent data item

    • Level 88 - Condition names (equivalent to an enumeration of values)

  • SIGN clause at group level (only supported on elementary items with PICTURE clause)

  • USAGE of COMP-1 or COMP-2

  • VALUE clause (used to define a value of a data item or conditional name from a literal or another data item)

  • SYNC clause (used to align values within a record)

REDEFINES Support

REDEFINES facilitates dynamic interpretation of data in a record. When you import a copybook with REDEFINES present, the generated schema uses a special grouping with the name '*' (or '*1', '*2', and so on, if multiple REDEFINES groupings are present at the same level) to combine all the different interpretations. You use this special grouping name in your DataWeave expressions just as you use any other grouping name.

Use of REDEFINES groupings has higher overhead than normal copybook groupings, so MuleSoft recommends that you remove REDEFINES from your copybooks where possible before you import them into Studio.

Character Encodings

BINARY (COMP), COMP-5, or PACKED-DECIMAL (COMP-3) usages are only supported with single-byte character encodings, which use the entire range of 256 potential character codes. UTF-8 and other variable-length encodings are not supported for these usages (because they’re not single-byte), and ASCII is also not supported (because it doesn’t use the entire range). Supported character encodings include ISO-8859-1 (an extension of ASCII to full 8 bits) and other 8859 variations and EBCDIC (IBM037).

REDEFINES requires you to use a single-byte-per-character character encoding for the data, but any single-byte-per-character encoding can be used unless BINARY (COMP), COMP-5, or PACKED-DECIMAL (COMP-3) usages are included in the data.

Common Copybook Import Issues

The most common issue with copybook imports is a failure to follow the Cobol standard for input line regions. The copybook import parsing ignores the contents of columns 1-6 of each line, and ignores all lines with an '*' (asterisk) in column 7. It also ignores everything beyond column 72 in each line. This means that all your actual data definitions need to be within columns 8 through 72 of input lines.

Tabs in the input are not expanded because there is no defined standard for tab positions. Each tab character is treated as a single space character when counting copybook input columns.

Indentation is ignored when processing the copybook, with only level-numbers treated as significant. This is not normally a problem, but it means that copybooks might be accepted for import even though they are not accepted by Cobol compilers.

Both warnings and errors might be reported as a result of a copybook import. Warnings generally tell of unsupported or unrecognized features, which might or might not be significant. Errors are notifications of a problem that means the generated schema (if any) will not be a completely accurate representation of the copybook. You should review any warnings or errors reported and decide on the appropriate handling, which might be simply accepting the schema as generated, modifying the input copybook, or modifying the generated schema.

Reader Properties (for Cobol Copybook)

When defining application/flatfile input for the DataWeave reader, you can set the properties described in Reader Properties (for Flat File).

Note that schemas with type Binary or Packed don’t allow for the detection of line breaks, so setting recordParsing to lenient only allows for long records to be handled, not short ones. These schemas only work with certain single-byte character encodings (so not with UTF-8 or any multibyte format).

Writer Properties (for Cobol Copybook)

When specifying application/flatfile as the output format in a DataWeave script, you can add the properties described in Writer Properties (for Flat File) to change the way the DataWeave parser processes the data.

Example: output Directive
output application/flatfile schemaPath="src/main/resources/schemas/QBReqRsp.esl", structureIdent="QBResponse"

CSV

MIME Type: application/csv

CSV content is modeled in DataWeave as a list of objects, where every record is an object and every field in it is a property, for example:

DataWeave Script That Outputs CSV:
%dw 2.0
output application/csv
---
[
  {
    "Name":"Mariano",
    "Last Name":"De achaval"
  },
  {
    "Name":"Leandro",
    "Last Name":"Shokida"
  }
]
CSV Output:
Name,Last Name
Mariano,De achaval
Leandro,Shokida

Reader Properties (for CSV)

In CSV, you can assign any special character as the indicator for separating fields, toggling quotes, or escaping quotes. Make sure you know what special characters are in your input so that DataWeave can interpret them correctly.

When defining application/csv input for the DataWeave reader, you can set the following properties.

  • bodyStartLineNumber (Number; default: 0): The line number where the body starts.

  • escape (Char; default: \): Character used to escape invalid characters, such as separators or quotes within field values.

  • ignoreEmptyLine (Boolean; default: true): Ignores empty lines. Valid options: true or false.

  • header (Boolean; default: true): Indicates whether the first line of the input contains header field names. Valid options: true or false.

  • headerLineNumber (Number; default: 0): The line number where the header is located.

  • quote (Char; default: "): Character to use for quotes.

  • separator (Char; default: ,): Character that separates one field from another.

  • streaming (Boolean; default: false): Used for streaming input CSV. Valid options: true or false. (Use only if entries are accessed sequentially.) See the streaming example, and see DataWeave Readers and Writers.

  • When header=true you can then access the fields within the input anywhere by name, for example: payload.userName.

  • When header=false you must access the fields by index, referencing first the entry and then the field, for example: payload[107][2]
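For example, this sketch (with illustrative field names and index values) reads a headerless CSV input by index and outputs JSON:

Example: Accessing Headerless CSV Fields by Index
%dw 2.0
input payload application/csv header=false
output application/json
---
{
  name: payload[0][0],
  lastName: payload[0][1]
}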

Streaming Example:

By default, the CSV reader stores input data from an entire file in memory if the file is 1.5 MB or less. If the file is larger than 1.5 MB, the process writes the data to disk. For very large files, you can improve the performance of the reader by setting the streaming property to true. To demonstrate the use of this property, the next example streams a CSV file and transforms it to JSON.

<flow name="dw-streamingFlow" >
  <scheduler doc:name="Scheduler" >
    <scheduling-strategy >
      <fixed-frequency frequency="1" timeUnit="MINUTES"/>
    </scheduling-strategy>
  </scheduler>
  <file:read
     path="${app.home}/input.csv"
     config-ref="File_Config"
     outputMimeType="application/csv; streaming=true; header=true"/>
  <ee:transform doc:name="Transform Message" >
    <ee:message >
      <ee:set-payload ><![CDATA[%dw 2.0
output application/json
---
payload map ((row) -> {
zipcode: row.zip
})]]></ee:set-payload>
    </ee:message>
  </ee:transform>
  <file:write doc:name="Write"
    config-ref="File_Config1"
    path="/path/to/output/file/output.json"/>
  <logger level="INFO" doc:name="Logger" message="#[payload]"/>
</flow>
  • The example configures the Read operation to stream the CSV input by setting outputMimeType="application/csv; streaming=true; header=true". In the Studio UI, you can set the MIME Type on the Read operation to application/csv and add a MIME Type parameter with Key streaming and Value true. The example also sets header=true for illustration purposes, though this setting is the default.

  • The DataWeave script uses the map function in the Transform Message component to iterate over each row in the CSV payload and select the value of the zip field in each row.

  • The Write operation returns a file, output.json, which contains the result of the transformation.

  • The Logger prints the same output payload that you see in output.json.

Note that the example reads the input file from the Studio project directory src/main/resources, which is the location of ${app.home}.

The structure of the CSV input looks something like the following.

CSV File Input for Streaming Example (truncated):
street,city,zip,state,beds,baths,sale_date
3526 HIGH ST,SACRAMENTO,95838,CA,2,1,Wed May 21 00:00:00 EDT 2018
51 OMAHA CT,SACRAMENTO,95823,CA,3,1,Wed May 21 00:00:00 EDT 2018
2796 BRANCH ST,SACRAMENTO,95815,CA,2,1,Wed May 21 00:00:00 EDT 2018
2805 JANETTE WAY,SACRAMENTO,95815,CA,2,1,Wed May 21 00:00:00 EDT 2018
6001 MCMAHON DR,SACRAMENTO,95824,CA,2,1,,Wed May 21 00:00:00 EDT 2018
5828 PEPPERMILL CT,SACRAMENTO,95841,CA,3,1,Wed May 21 00:00:00 EDT 2018

Note that a streamed file is typically much longer.

Output for Streaming Example:
[
  {
    "zipcode": "95838"
  },
  {
    "zipcode": "95823"
  },
  {
    "zipcode": "95815"
  },
  {
    "zipcode": "95815"
  },
  {
    "zipcode": "95824"
  },
  {
    "zipcode": "95841"
  }
]

Writer Properties (for CSV)

When specifying application/csv as the output format in a DataWeave script, you can add the following properties to change the way the DataWeave parser processes data.

  • bodyStartLineNumber (Number; default: 0): The line number where the body starts.

  • bufferSize (Number; default: 8192): Size of the writer buffer.

  • deferred (Boolean; default: false): When set to true, DataWeave generates the output as a data stream, and the script’s execution is deferred until it is consumed. Valid options: true or false.

  • encoding (String): Encoding to be used by this writer, such as UTF-8.

  • escape (Char; default: \): Character used to escape an invalid character, such as occurrences of the separator or quotes within field values.

  • lineSeparator (String): Line separator to use when writing the CSV, for example: "\r\n".

  • header (Boolean; default: true): Indicates whether the first line of the output contains header field names. Valid options: true or false.

  • headerLineNumber (Number; default: 0): The line number where the header is located.

  • ignoreEmptyLine (Boolean; default: true): Ignores empty lines. Valid options: true or false.

  • quote (Char; default: "): The character to be used for quotes.

  • quoteHeader (Boolean; default: false): Indicates whether to quote header values. Valid options: true or false.

  • quoteValues (Boolean; default: false): Indicates whether to quote every value, whether or not it contains special characters. Valid options: true or false.

  • separator (String; default: ,): Character that separates one field from another.

All of these parameters are optional.

A CSV output directive example might look like this:

Example: output Directive
output application/csv separator=";", header=false, quoteValues=true
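Applied to the two-record script shown earlier, that directive writes semicolon-separated, fully quoted records with no header line, producing output similar to "Mariano";"De achaval":

Example: Script Using CSV Writer Properties
%dw 2.0
output application/csv separator=";", header=false, quoteValues=true
---
[
  { "Name": "Mariano", "Last Name": "De achaval" },
  { "Name": "Leandro", "Last Name": "Shokida" }
]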

Defining a Metadata Type (for CSV)

In the Transform component, you can define a CSV type through the following methods:

  • By providing a sample file.

  • Through a graphical editor that allows you to set up each field manually.


DataWeave (weave)

MIME Type: application/dw

This format is for debugging purposes only. Performance impacts can occur if you use this format in a production environment.

The DataWeave (weave) format is the canonical format for all transformations. This format can help you understand how input data is interpreted before it is transformed to a new format.

This format is intended only to help you debug the results of DataWeave transformations. It is significantly slower than other formats, so avoid using it in production applications.

This example shows how XML input is expressed in the DataWeave format.

Input XML
<employees>
  <employee>
    <firstname>Mariano</firstname>
    <lastname>DeAchaval</lastname>
  </employee>
  <employee>
    <firstname>Leandro</firstname>
    <lastname>Shokida</lastname>
  </employee>
</employees>
Output: in DataWeave Format
{
  employees: {
    employee: {
      firstname: "Mariano",
      lastname: "DeAchaval"
    },
    employee: {
      firstname: "Leandro",
      lastname: "Shokida"
    }
  }
} as Object {encoding: "UTF-8", mimeType: "text/xml"}

Writer Properties (for weave)

When specifying application/dw as the output format in a DataWeave script, you can add the following properties to change the way the parser processes data.

  • bufferSize (Number; default: 8192): Size of the writer buffer.

  • deferred (Boolean; default: false): When set to true, DataWeave generates the output as a data stream, and the script’s execution is deferred until it is consumed. Valid options: true or false.

  • ignoreSchema (Boolean; default: false): Indicates whether the writer ignores the schema. Valid options: true or false.

  • indent (String): The string to use for indentation.

  • maxCollectionSize (Number; default: -1): The maximum number of elements allowed in an Array or an Object. -1 means that no limit is set.
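For example, the following sketch writes the canonical DataWeave representation of an array while limiting collections to two elements through maxCollectionSize (the truncation behavior is assumed from the property description above):

Example: Script Using weave Writer Properties
%dw 2.0
output application/dw maxCollectionSize=2
---
[1, 2, 3, 4]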

Excel

MIME Type: application/xlsx

Only .xlsx files are supported (Excel 2007). .xls files are not supported by Mule.

An Excel workbook is a sequence of sheets. In DataWeave, this is mapped to an object where each sheet is a key. Only one table is allowed per Excel sheet. A table is expressed as an array of rows. A row is an object where its keys are the columns and the values the cell content.

Figure 2. Input Excel table
DataWeave Script That Outputs XLSX:
%dw 2.0
output application/xlsx header=true
---
{
  Sheet1: [
    {
      Id: 123,
      Name: George
    },
    {
      Id: 456,
      Name: Lucas
    }
  ]
}

For another example, see Look Up Data in an Excel (XLSX) File.

Reader Properties (for Excel)

When defining application/xlsx input for the DataWeave reader, you can set the following properties.

  • header (Boolean; default: true): Indicates whether the Excel table contains headers. Valid options: true or false.

  • ignoreEmptyLine (Boolean; default: true): Indicates whether to ignore empty lines. Valid options: true or false.

  • streaming (Boolean; default: false): Introduced in Mule 4.2.2. Streaming is intended for processing large files. When streaming is enabled, the reader accesses each row sequentially, keeping one row in memory at a time instead of making all data available at once. Streaming does not permit random access to rows in the file.

  • tableOffset (String; default: none): The position of the first cell in the table (<Column><Row>, for example A1).

  • zipBombCheck (Boolean; default: true): If set to false, the zip bomb check is turned off. Valid options: true or false.

Streaming Example:

By default, the Excel reader stores input data from an entire file in memory if the file is 1.5 MB or less. If the file is larger than 1.5 MB, the process writes the data to disk. For very large files, you can improve the performance of the reader by setting the streaming property to true. The following example streams an XLSX file and transforms it to JSON.

<http:listener-config
    name="HTTP_Listener_config"
    doc:name="HTTP Listener config" >
  <http:listener-connection host="0.0.0.0" port="8081" />
</http:listener-config>
<flow name="streaming_flow" >
  <http:listener
    doc:name="Listener"
    config-ref="HTTP_Listener_config"
    path="/"
    outputMimeType="application/xlsx; streaming=true"/>
  <ee:transform doc:name="Transform Message" >
    <ee:message >
      <ee:set-payload ><![CDATA[%dw 2.0
output application/json
---
payload."Sheet Name" map ((row) -> {
    foo: row.a,
    bar: row.b
})]]></ee:set-payload>
    </ee:message>
  </ee:transform>
</flow>

The example:

  • Configures the HTTP listener to stream the XLSX input by setting outputMimeType="application/xlsx; streaming=true". In the Studio UI, you can set the MIME Type on the listener to application/xlsx and add a MIME Type parameter with Key streaming and Value true.

  • Uses a DataWeave script in the Transform Message component to iterate over each row in the XLSX payload (an XLSX sheet called "Sheet Name") and select the values of each cell in the row (using row.a, row.b). It assumes columns named a and b and maps the values from each row in those columns into foo and bar, respectively.

Writer Properties (for Excel)

When specifying application/xlsx as the output format in a DataWeave script, you can add the following properties to change the way the DataWeave parser processes data.

  • bufferSize (Number; default: 8192): Size of the writer buffer.

  • deferred (Boolean; default: false): When set to true, DataWeave generates the output as a data stream, and the script’s execution is deferred until it is consumed. Valid options: true or false.

  • header (Boolean; default: true): Indicates whether the Excel table contains headers. Valid options: true or false. When there are no headers, column names are used (for example, A, B, C, …).

  • ignoreEmptyLine (Boolean; default: true): Indicates whether to ignore empty lines. Valid options: true or false.

  • tableOffset (String; default: none): The position of the first cell in the table (<Column><Row>, for example A1).

  • zipBombCheck (Boolean; default: true): If set to false, the zip bomb check is turned off. Valid options: true or false.

All of these parameters are optional. A DataWeave output directive for Excel might look like this:

Example: output Directive
output application/xlsx header=true
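As with the CSV examples earlier, you can embed this directive in a File Write operation to produce a workbook file; the path and configuration reference below are illustrative:

Example: Writing XLSX Output with a Write Operation
<file:write doc:name="Write" config-ref="File_Config" path="report.xlsx">
  <file:content ><![CDATA[#[output application/xlsx --- payload]]]></file:content>
</file:write>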

Defining a Metadata Type (for Excel)

In the Transform component, you can define an Excel type through the following method:

  • Through a graphical editor that allows you to set up each field manually.


Fixed Width

MIME Type: application/flatfile

Fixed width types are technically considered a type of Flat File format, but when selecting this option, the Transform component offers you settings that are better tailored to the needs of this format.

Fixed width in DataWeave supports files of up to 15 MB, and the memory requirement is roughly 40 to 1. For example, a 1-MB file requires up to 40 MB of memory to process, so it’s important to consider this memory requirement in conjunction with your TPS needs for large fixed width files. This is not an exact figure; the value might vary according to the complexity of the mapping instructions.

Reader Properties (for Fixed Width)

When defining application/flatfile input for the DataWeave reader, you can set the properties described in Reader Properties (for Flat File).

Note that schemas with type Binary or Packed don’t allow for the detection of line breaks, so setting recordParsing to lenient only allows for long records to be handled, not short ones. These schemas only work with certain single-byte character encodings (so not with UTF-8 or any multibyte format).

Writer Properties (for Fixed Width)

When specifying application/flatfile as the output format in a DataWeave script, you can add the properties described in Writer Properties (for Flat File) to change the way the DataWeave parser processes the data.

All of the properties are optional.

A DataWeave output directive might look like this:

Example: output Directive
output application/flatfile schemaPath="src/main/resources/schemas/payment.ffd", encoding="UTF-8"

Defining a Metadata Type (for Fixed Width)

In the Transform component, you can define a Fixed Width type through the following methods:

  • By providing a sample file.

  • By pointing to a Flat File schema file.

  • Through a graphical editor that allows you to set up each field manually.


Flat File

MIME Type: application/flatfile

Flat file supports multiple types of fixed width records within a single message. The schema structure allows you to define how different record types are distinguished, and how the records are logically grouped.
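For reference, a minimal flat file schema sketch for a single fixed width record might look like the following. This is not from this document; the field names and lengths are hypothetical, and a real schema would use the full flat file schema language.

Example: Flat File Schema (sketch)
form: FIXEDWIDTH
values:
- { name: 'id', type: String, length: 10 }
- { name: 'amount', type: Integer, length: 8 }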

Flat file in DataWeave supports files of up to 15 MB, and the memory requirement is roughly 40 to 1. For example, a 1-MB file requires up to 40 MB of memory to process, so it’s important to consider this memory requirement in conjunction with your TPS needs for large flat files. This is not an exact figure; the value might vary according to the complexity of the mapping instructions.

Reader Properties (for Flat File)

When defining application/flatfile input for the DataWeave reader, you can set the following property.

Parameter Type Default Description

enforceRequires

Boolean

false

Error if a required value is missing. Valid options: true or false

missingValues

String

nulls for copybook schema, spaces otherwise

Fill character used to represent missing values. To represent missing values in the input data, you can use:

  • none: Treat all data as actual values

  • spaces: Interpret a field consisting of only spaces as a missing value

  • zeroes: Interpret numeric fields consisting of only '0' characters and character fields consisting of only spaces as missing values

  • nulls: Interpret a field consisting only of 0 bytes as a missing value

recordParsing

String

strict

Expected separation between lines/records:

  • strict: line break expected at exact end of each record

  • lenient: line break used but records may be shorter or longer than schema specifies

  • noTerminator: records follow one another with no separation

  • singleRecord: entire input is a single record

schemaPath

String

None

Schema definition. Location on your local disk of the schema file used to parse your input.

segmentIdent

String

None

Segment identifier in the schema for fixed width or copybook schemas (only needed when parsing a single segment/record definition and if the schema includes multiple segment definitions).

structureIdent

String

None

Structure identifier in schema for flatfile schemas (only needed when parsing a structure definition, and if the schema includes multiple structure definitions)

truncateDependingOn

Boolean

false

Truncate COBOL copybook DEPENDING ON values to length used. Valid options: true or false

zonedDecimalStrict

Boolean

false

Use the strict ASCII form of sign encoding for COBOL copybook zoned decimal values. Valid options: true or false

Note that schemas with type Binary or Packed don’t allow for line break detection, so setting recordParsing to lenient only allows long records to be handled, not short ones. These schemas also currently only work with certain single-byte character encodings (so not with UTF-8 or any multibyte format).

Writer Properties (for Flat File)

When specifying application/flatfile as the output format in a DataWeave script, you can add the following properties to change the way the DataWeave parser processes data.

Parameter Type Default Description

bufferSize

Number

8192

Size of the buffer writer.

deferred

Boolean

false

When set to true, DataWeave generates the output as a data stream, and the script’s execution is deferred until it is consumed. Valid options: true or false

encoding

String

None

Encoding to be used by this writer, such as UTF-8.

enforceRequires

Boolean

false

Error if a required value is missing. Valid options: true or false

missingValues

String

NULLS for copybook schema, SPACES otherwise

Fill character used to represent missing values:

  • NONE: Write nothing for missing values

  • SPACES: Fill field with spaces

  • ZEROES: Fill numeric fields with '0' characters and character fields with space characters

  • NULLS: Fill field with 0 bytes

recordTerminator

String

System property line.separator

Record separator line break. Valid options:

  • lf

  • cr

  • crlf

  • None

Note that in Mule versions 4.0.4 and older, this is only used as a separator when there are multiple records. Values translate directly to character codes (none leaves no termination on each record).

schemaPath

String

None

Schema definition. Path where the schema file to be used is located.

segmentIdent

String

None

Segment identifier in the schema for fixed width or copybook schemas (only needed when writing a single segment/record definition, and if the schema includes multiple segment definitions).

structureIdent

String

None

Structure identifier in schema for flatfile schemas (only needed when writing a structure definition and if the schema includes multiple structure definitions)

trimValues

Boolean

false

Trim string values longer than the field length by truncating trailing characters. Valid options: true or false

truncateDependingOn

Boolean

false

Truncate DEPENDING ON COBOL copybook values to length used. Valid options: true or false

zonedDecimalStrict

Boolean

false

Use the strict ASCII form of sign encoding for COBOL copybook zoned decimal values. Valid options: true or false.

DataWeave Script that Outputs a Flat File:
%dw 2.0
output application/flatfile schemaPath="src/main/resources/test-data/QBReqRsp.esl", structureIdent="QBResponse"
---
payload

Defining a Metadata Type (for Flat File)

In the Transform component, you can define a Flat File type by pointing to a schema file.

Multipart (Form-Data)

MIME Type: multipart/form-data

DataWeave supports multipart subtypes, in particular form-data. These formats enable you to handle several different data parts in a single payload, regardless of the format each part has. To distinguish the beginning and end of a part, a boundary is used and metadata for each part can be added through headers.

Below is a raw multipart/form-data payload that uses a 34b21 boundary and consists of three parts:

  • a text/plain one named text

  • an application/json file (a.json) named file1

  • a text/html file (a.html) named file2

Raw Multipart
--34b21
Content-Disposition: form-data; name="text"
Content-Type: text/plain

Book
--34b21
Content-Disposition: form-data; name="file1"; filename="a.json"
Content-Type: application/json

{
  "title": "Java 8 in Action",
  "author": "Mario Fusco",
  "year": 2014
}
--34b21
Content-Disposition: form-data; name="file2"; filename="a.html"
Content-Type: text/html

<!DOCTYPE html>
<title>
  Available for download!
</title>
--34b21--

Within a DataWeave script, you can access and transform data from any of the parts by selecting the parts element. Navigation can be array based or key based when parts feature a name to reference them by. The part’s data can be accessed through the content keyword while headers can be accessed through the headers keyword.

The following script, for example, produces Book:a.json given the previous payload:

Reading Multipart Content:
%dw 2.0
output text/plain
---
payload.parts.text.content ++ ':' ++ payload.parts[1].headers.'Content-Disposition'.filename

You can generate multipart content where DataWeave builds an object with a list of parts, each containing its headers and content. The following DataWeave script produces the raw multipart data (previously analyzed) if the HTML data is available in the payload.

Writing Multipart Content:
%dw 2.0
output multipart/form-data
boundary='34b21'
---
{
  parts : {
    text : {
      headers : {
        "Content-Type": "text/plain"
      },
      content : "Book"
    },
    file1 : {
      headers : {
        "Content-Disposition" : {
            "name": "file1",
            "filename": "a.json"
        },
        "Content-Type" : "application/json"
      },
      content : {
        title: "Java 8 in Action",
        author: "Mario Fusco",
        year: 2014
      }
    },
    file2 : {
      headers : {
        "Content-Disposition" : {
            "filename": "a.html"
        },
        "Content-Type" : payload.^mimeType
      },
      content : payload
    }
  }
}

Notice that the key determines the part’s name if it is not explicitly provided in the Content-Disposition header, and note that DataWeave can handle content from supported formats, as well as references to unsupported ones, such as HTML.

Reader Properties (for Multipart)

When defining multipart/form-data input for the DataWeave reader, you can set the following property.

You can set the boundary for the reader to use when it analyzes the data.

Parameter Type Default Description

boundary

String

None

The multipart boundary value. A String to delimit parts.

Note that in the DataWeave read function, you can also pass the property as an optional parameter. The scope of the property is limited to the DataWeave script where you call the function.
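As a sketch of that usage (assuming the raw multipart content shown earlier arrives as the payload), the read function takes the boundary as a reader property in its third argument:

DataWeave Script:
%dw 2.0
output application/json
---
read(payload, "multipart/form-data", {boundary: "34b21"}).parts.text.content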

Writer Properties (for Multipart)

When specifying multipart/form-data as the output format in a DataWeave script, you can add the following property to change the way the DataWeave parser processes data.

Example: output Directive
output multipart/form-data

In the output directive, you can also set a property for the writer to use when it outputs the data in the specified format.

Parameter Type Default Description

boundary

String

None

The multipart boundary value. A String to delimit parts.

bufferSize

Number

8192

Size of the buffer writer.

deferred

Boolean

false

When set to true, DataWeave generates the output as a data stream, and the script’s execution is deferred until it is consumed. Valid options: true or false

For example, if a boundary is 34b21, then you can pass this:

Example: output Directive
output multipart/form-data boundary=34b21

Note that in the DataWeave write function, you can also pass the property as an optional parameter. The scope of the property is limited to the DataWeave script where you call the function.

Multipart is typically, but not exclusively, used in HTTP where the boundary is shared through the Content-Type header, both for reading and writing content.

Java

MIME Type: application/java

This table shows the mapping of Java types to DataWeave types.

Java Type DataWeave Type

Collections/Array/Iterator/Iterable

Array

String/CharSequence/Char/Enum/Class

String

int/Short/Long/BigInteger/Float/Double/BigDecimal

Number

Calendar/XmlGregorianCalendar

DateTime

TimeZone

TimeZone

sql.Date/util.Date

Date

Bean/Map

Object

InputStream/Array[Byte]

Binary

java.lang.Boolean

Boolean

Writer Properties (for Java)

When specifying application/java as the output format in a DataWeave script, you can add the following properties to change the way the DataWeave parser processes data.

Parameter Type Default Description

duplicateKeyAsArray

Boolean

false

If duplicate keys are detected in an object, the writer will change the value to an array with all those values. Valid options: true or false

writeAttributes

Boolean

false

If a key has attributes, the writer adds them as child key-value pairs of the key that contains them. The attribute key names start with @. Valid options: true or false

Custom Types (for Java)

There are a couple of custom Java types:

  • class

  • Enum

Metadata Property class (for Java)

Java developers use the class metadata key as a hint for the Java class that needs to be created and sent as an input. If the class is not explicitly defined, DataWeave tries to infer it from the context or assigns these default values:

  • java.util.HashMap for objects

  • java.util.ArrayList for lists

%dw 2.0
type user = Object { class: "com.anypoint.df.pojo.User"}
output application/json
---
{
  name : "Mariano",
  age : 31
} as user

The code above defines the type of the required input as an instance of com.anypoint.df.pojo.User.

Enum Custom Type (for Java)

In order to put an enum value in a java.util.Map, the DataWeave Java module defines a custom type called Enum. It allows you to specify that a given string should be handled as the name of a specified enum type. It should always be used with the class property with the Java class name of the enum.
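The pattern looks like the following sketch, where com.acme.ColorEnum is a hypothetical enum class:

DataWeave Script:
%dw 2.0
output application/java
---
{
  color: "RED" as Enum {class: "com.acme.ColorEnum"}
}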

Defining a Metadata Type (for Java)

In the Transform component, you can define a Java type through the following method:

  • By providing a sample object

JSON

MIME Type: application/json

Writer Properties (for JSON)

When specifying application/json as the output format in a DataWeave script, you can add the following properties to change the way the DataWeave parser processes data.

Parameter Type Default Description

bufferSize

Number

8192

Size of the buffer writer.

deferred

Boolean

false

When set to true, DataWeave generates the output as a data stream, and the script’s execution is deferred until it is consumed.

duplicateKeyAsArray

Boolean

false

If duplicate keys are detected in an object, the writer changes the value to an array with all those values. Valid options: true or false. Note that the JSON language does not allow duplicate keys under the same parent, so the duplication usually raises an exception.

encoding

String

UTF-8

The character set to use for the output.

indent

Boolean

true

Indicates whether to indent the JSON code for better readability or to compress the JSON into a single line. Valid options: true or false

skipNullOn

String

None

Skips null values in the specified data structure. By default it does not skip. Valid options: arrays, objects, or everywhere. See Skip Null On (for JSON).

Example: output Directive
output application/json indent=false, skipNullOn="arrays"

Reader Properties (for JSON)

When defining application/json input for the DataWeave reader, you can set the following property.

Parameter Type Default Description

streaming

Boolean

false

Used for streaming input. Use only if entries are accessed sequentially. Valid options: true or false. The input must be a top-level array. For more on streaming in DataWeave, see DataWeave Readers and Writers.

To demonstrate streaming, the following example streams a JSON file by reading each element in an array one at a time.

Streaming Example:
<file:config name="File_Config" doc:name="File Config" />
<flow name="dw-streaming-jsonFlow" >
  <scheduler doc:name="Scheduler" >
    <scheduling-strategy >
      <fixed-frequency frequency="1" timeUnit="MINUTES"/>
    </scheduling-strategy>
  </scheduler>
  <file:read doc:name="Read"
     config-ref="File_Config"
     path="${app.home}/myjsonarray.json"
     outputMimeType="application/json; streaming=true"/>
  <ee:transform doc:name="Transform Message" >
    <ee:message >
      <ee:set-payload ><![CDATA[%dw 2.0
output application/json
---
payload.myJsonExample map ((element) -> {
returnedElement : element.zipcode
})]]></ee:set-payload>
    </ee:message>
  </ee:transform>
  <file:write doc:name="Write"
    path="/path/to/output/file/output.json"
    config-ref="File_Config"/>
  <logger level="INFO" doc:name="Logger" message="#[payload]"/>
</flow>
  • The streaming example configures the File Read operation to stream the JSON input by setting outputMimeType="application/json; streaming=true". In the Studio UI, you can set the MIME Type on the Read operation to application/json and the Parameters for the MIME Type to Key streaming and Value true.

  • The DataWeave script in the Transform Message component iterates over the array in the input payload and selects its zipcode values.

  • The Write operation returns a file, output.json, which contains the result of the transformation.

  • The Logger prints the same output payload that you see in output.json.

The JSON input payload looks like the following.

JSON Input for Streaming Example (truncated):
{ "myJsonExample" : [
    {
      "name" : "Shoki",
      "zipcode": "95838"
    },
    {
      "name" : "Leandro",
      "zipcode": "95823"
    },
    {
      "name" : "Mariano",
      "zipcode": "95815"
    },
    {
      "name" : "Cristian",
      "zipcode": "95815"
    },
    {
      "name" : "Kevin",
      "zipcode": "95824"
    },
    {
      "name" : "Stanley",
      "zipcode": "95841"
    }
  ]
}
Output for JSON Streaming Example:
[
  {
    "returnedElement": "95838"
  },
  {
    "returnedElement": "95823"
  },
  {
    "returnedElement": "95815"
  },
  {
    "returnedElement": "95815"
  },
  {
    "returnedElement": "95824"
  },
  {
    "returnedElement": "95841"
  }
]

Skip Null On (for JSON)

You can use the skipNullOn writer property to omit null values from arrays, objects, or both.

When set to:

  • arrays

    Ignores and omits null values from arrays in the JSON output, for example, output application/json skipNullOn="arrays".

  • objects

    Ignore an object that has a null value. The output contains an empty object ({}) instead of the object with the null value, for example, output application/json skipNullOn="objects".

  • everywhere

    Apply skipNullOn to arrays and objects, for example: output application/json skipNullOn="everywhere".
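The following sketch illustrates the everywhere option on a small payload defined inline; both the null object value and the null array element are omitted from the output:

DataWeave Script:
%dw 2.0
output application/json skipNullOn="everywhere"
---
{
  name: "Annie",
  nickname: null,
  codes: [1, null, 2]
}
Output:
{
  "name": "Annie",
  "codes": [1, 2]
}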

Defining a Metadata Type for JSON

In the Transform component, you can define a JSON type through the following methods:

  • By providing a sample file

  • By pointing to a schema file

Newline Delimited JSON

MIME type: application/x-ndjson

Writer Properties (for ndjson)

When specifying application/x-ndjson as the output format in a DataWeave script, you can add the following properties to change the way the DataWeave parser processes data.

Parameter Type Default Description

writeAttributes

Boolean

false

If a key has attributes, the writer adds them as child key-value pairs of the key that contains them. Valid options: true or false

encoding

String

UTF-8

The character set to use for the output.

bufferSize

Number

8192

Size of the buffer writer.

skipNullOn

String

None

Skips null values in the specified data structure. By default it does not skip. Valid options: arrays or objects

deferred

Boolean

false

When set to true, DataWeave generates the output as a data stream, and the script’s execution is deferred until it is consumed. Valid options: true or false

Reader Properties (for ndjson)

When defining application/x-ndjson input for the DataWeave reader, you can set the following property.

Parameter Type Default Description

skipInvalid

Boolean

false

Indicates whether to skip invalid JSON lines instead of failing. Valid options: true or false

ignoreEmptyLine

Boolean

true

Indicates whether to ignore empty lines. Valid options: true or false
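As a sketch, writing an array of objects as application/x-ndjson produces one JSON document per line, with each array element written as a separate document:

DataWeave Script:
%dw 2.0
output application/x-ndjson
---
[
  { name: "Leandro", zipcode: "95838" },
  { name: "Mariano", zipcode: "95815" }
]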

Octet Stream

MIME Type: application/octet-stream

Writer Properties (for octet-stream)

When specifying application/octet-stream as the output format in a DataWeave script, you can add the following properties to change the way the DataWeave parser processes data.

Parameter Type Default Description

bufferSize

Number

8192

Size of the buffer writer.

deferred

Boolean

false

When set to true, DataWeave generates the output as a data stream, and the script’s execution is deferred until it is consumed. Valid options: true or false

Text Plain

MIME Type: text/plain

Writer Properties (for text/plain)

When specifying text/plain as the output format in a DataWeave script, you can add the following properties to change the way the DataWeave parser processes data.

Parameter Type Default Description

encoding

String

None

Encoding for the writer to use.

bufferSize

Number

8192

Size of the buffer writer.

deferred

Boolean

false

When set to true, DataWeave generates the output as a data stream, and the script’s execution is deferred until it is consumed. Valid options: true or false

Text Java Properties

MIME Type: text/x-java-properties

Writer Properties (for properties)

When defining text/x-java-properties output in the DataWeave output directive, you can change the way the parser behaves by adding optional properties.

Parameter Type Default Description

encoding

String

None

Encoding for the writer to use.

bufferSize

Number

8192

Size of the buffer writer.

deferred

Boolean

false

When set to true, DataWeave generates the output as a data stream, and the script’s execution is deferred until it is consumed. Valid options: true or false
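As a sketch (the keys here are hypothetical), a flat DataWeave object maps directly to key=value lines in the properties output, roughly like db.host=localhost on one line and db.port=5432 on the next:

DataWeave Script:
%dw 2.0
output text/x-java-properties
---
{
  "db.host": "localhost",
  "db.port": "5432"
}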

XML

MIME Type: application/xml

The XML data structure is mapped to DataWeave objects that can contain other objects as values to their keys. Repeated keys are supported.

Input
<users>
  <company>MuleSoft</company>
  <user name="Leandro" lastName="Shokida"/>
  <user name="Mariano" lastName="Achaval"/>
</users>
DataWeave Script:
{
  users: {
    company: "MuleSoft",
    user @(name: "Leandro",lastName: "Shokida"): "",
    user @(name: "Mariano",lastName: "Achaval"): ""
  }
}

Reader Properties (for XML)

When defining application/xml input for the DataWeave reader, you can set the following properties.

Parameter Type Default Description

maxEntityCount

Number

1

The maximum number of entity expansions. The limit is in place to avoid Billion Laughs attacks.

indexedReader

Boolean

true

Indicates whether to use the indexed XML reader when the input size threshold is reached. Valid options: true or false. Available since Mule 4.2.1.

nullValueOn

String

blank

If a tag with empty or blank text should be read as null. Valid options: empty, none, or blank

externalEntities

Boolean

false

Indicates whether external entities should be processed or not. By default this is disabled to avoid XXE attacks. Valid options: true or false

supportDtd

Boolean

false

Enable or disable DTD support. Disabling skips (and does not process) internal and external subsets. Valid Options are true or false. You can also enable this property by setting the Mule system property com.mulesoft.dw.xml.supportDTD. Note that the default for this property changed from true to false in Mule version 4.2.2-20210419.

Writer Properties (for XML)

When specifying application/xml as the output format in a DataWeave script, you can add the following properties to change the way the DataWeave parser processes data.

Parameter Type Default Description

bufferSize

Number

8192

Size of the buffer writer.

encoding

String

None

Encoding for the writer to use.

deferred

Boolean

false

When set to true, DataWeave generates the output as a data stream, and the script’s execution is deferred until it is consumed. Valid options: true or false. Available since Mule 4.2.1.

indent

Boolean

true

Indicates whether to indent the output. Valid options: true or false

inlineCloseOn

String

empty

When the writer should use inline close tag. Valid options: empty or none

onInvalidChar

String

None

Valid options: base64, ignore, or none

writeNilOnNull

Boolean

false

Whether to write a nil attribute when the value is null. Valid options: true or false

skipNullOn

String

None

Skips null values in the specified data structure. By default it does not skip. Valid options: elements, attributes, or everywhere. See Skip Null On (for XML)

writeDeclaration

Boolean

true

Indicates whether to write the XML header declaration. Valid options: true or false

Example: output Directive
output application/xml indent=false, skipNullOn="attributes"

The inlineCloseOn parameter defines whether empty elements are written with explicit close tags (set with a value of none):

<someXml>
  <parentElement>
    <emptyElement1></emptyElement1>
    <emptyElement2></emptyElement2>
    <emptyElement3></emptyElement3>
  </parentElement>
</someXml>

Or with self-closing inline tags (the default, empty):

<someXml>
  <parentElement>
    <emptyElement1/>
    <emptyElement2/>
    <emptyElement3/>
  </parentElement>
</someXml>

Skip Null On (for XML)

You can specify whether your transform generates an outbound message that contains fields with "null" values, or if these fields are ignored entirely. This can be set through an attribute in the output directive named skipNullOn, which can be set to three different values: elements, attributes, or everywhere.

When set to:

  • elements: A key:value pair with a null value is ignored.

  • attributes: An XML attribute with a null value is skipped.

  • everywhere: Apply this rule to both elements and attributes.
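For instance, the following sketch sets skipNullOn="elements" in the output directive; only the name element is written, and the nickname element is skipped:

DataWeave Script:
%dw 2.0
output application/xml skipNullOn="elements"
---
{
  user: {
    name: "Annie",
    nickname: null
  }
}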

Defining a Metadata Type (for XML)

In the Transform component, you can define an XML type through the following methods:

  • By providing a sample file

  • By pointing to a schema file

CData Custom Type (for XML)

MIME Type: application/xml

CData is a custom data type for XML that is used to identify a CDATA XML block. It can tell the writer to wrap the content inside CDATA or to check if the input string arrives inside a CDATA block. CData inherits from the type String.

DataWeave Script:
%dw 2.0
output application/xml
---
{
  users:
  {
    user : "Mariano" as CData,
    age : 31 as CData
  }
}
Output:
<?xml version="1.0" encoding="UTF-8"?>
<users>
  <user><![CDATA[Mariano]]></user>
  <age><![CDATA[31]]></age>
</users>

URL Encoding

MIME Type: application/x-www-form-urlencoded

A URL encoded string is mapped to a DataWeave object:

  • You can read the values by their keys using the dot or star selector.

  • You can write the payloads by providing a DataWeave object.

Here is an example of x-www-form-urlencoded data:

Data
key=value&key+1=%40here&key=other+value&key+2%25

The following DataWeave script produces the data above:

DataWeave Object
output application/x-www-form-urlencoded
---
{
  "key" : "value",
  "key 1": "@here",
  "key" : "other value",
  "key 2%": null
}

You can read the data above as input to the DataWeave script in the next example, which returns value@here as the result.

DataWeave Script:
output text/plain
---
payload.*key[0] ++ payload.'key 1'

Note that there are no reader properties for URL encoded data.

Writer (for URL Encoded Data)

When specifying application/x-www-form-urlencoded as the output format in a DataWeave script, you can add the following properties to change the way the DataWeave parser processes data.

Parameter Type Default Description

encoding

String

None

Encoding for the writer to use.

bufferSize

Number

8192

Size of the buffer writer.

deferred

Boolean

false

When set to true, DataWeave generates the output as a data stream, and the script’s execution is deferred until it is consumed. Valid options: true or false

Examples: output Directive
  • output application/x-www-form-urlencoded

  • output application/x-www-form-urlencoded encoding="UTF-8", bufferSize="500"

Note that in the DataWeave write function, you can also pass the property as an optional parameter. The scope of the property is limited to the DataWeave script where you call the function.
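As a sketch of that usage, the write function takes the writer property in its third argument and returns the encoded string (something like key=value&key+1=%40here for this input):

DataWeave Script:
%dw 2.0
output text/plain
---
write({ key: "value", "key 1": "@here" }, "application/x-www-form-urlencoded", {encoding: "UTF-8"})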

YAML

MIME Type: application/yaml

Writer Properties (for YAML)

When specifying application/yaml as the output format in a DataWeave script, you can add the following properties to change the way the DataWeave parser processes data.

Parameter Type Default Description

encoding

String

UTF-8

Encoding for the writer to use.

bufferSize

Number

8192

Size of the buffer writer.

deferred

Boolean

false

When set to true, DataWeave generates the output as a data stream, and the script’s execution is deferred until it is consumed. Valid options: true or false

skipNullOn

String

None

Skips null values in the specified data structure. By default it does not skip. Valid options: arrays, objects, or everywhere
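
A YAML output directive can combine these properties; for reference, a minimal sketch:

DataWeave Script:
%dw 2.0
output application/yaml skipNullOn="objects"
---
{
  name: "Mariano",
  roles: ["admin", "dev"],
  nickname: null
}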