Sequential Dataset → AWS S3
Learn how to automate migrating a z/OS sequential dataset to an AWS S3 bucket using Grace.
This tutorial will guide you through using Grace to orchestrate a simple but powerful hybrid workflow: moving a text-based sequential dataset from a z/OS mainframe to an AWS S3 bucket.
You will learn how to:
- Define a Grace job that accesses a z/OS dataset.
- Utilize Grace's `shell` job module to interact with the AWS CLI.
- Manage character encoding to ensure your mainframe EBCDIC text data is correctly converted to a readable format (like UTF-8) in the cloud.
- Automate the entire extraction and load process with a single `grace run` command.
This is a common scenario for archiving mainframe reports, making mainframe data available for cloud-based analytics, or staging data for further processing in cloud environments.
What you'll build
You'll write a grace.yml file that defines a single Grace job. This job will:
- Take a z/OS sequential dataset (containing EBCDIC text) as an input.
- Download this dataset to a local staging directory where Grace is running, converting it to your local text encoding (e.g., UTF-8).
- Upload this locally staged text file to an S3 bucket you specify.

Prerequisites
Before you begin, please ensure you have the following set up and configured:
- Grace CLI installed:
- If not, see Installation.
- Zowe CLI:
- Install Zowe CLI (v3 LTS recommended).
- You must have a Zowe CLI `zosmf` profile that can successfully connect to your target z/OS system. This profile needs at least READ access to the sequential dataset you intend to migrate.
- Verify your profile with `zowe zosmf check status` (specifying your profile if it isn't the default). For help, see Configuration.
- AWS CLI installed and configured:
- Install the AWS CLI.
- Configure it with AWS credentials that have permission to list buckets, create buckets (optional for this tutorial if you use an existing one), and upload objects (PutObject) to S3. Typically, this involves running `aws configure` and providing your Access Key ID, Secret Access Key, and default region.
- This tutorial assumes AWS CLI version 2.
- Access to a z/OS system: Where your source sequential dataset resides.
- An AWS S3 bucket:
- You'll need an S3 bucket where the file will be uploaded. If you don't have one, you'll create one in the first part of this tutorial.
- A text editor: For creating and editing your `grace.yml` file.
Prepare your mainframe data and S3 bucket
Before we define the Grace workflow, let's set up the necessary resources: a sample sequential dataset on the mainframe and an S3 bucket in AWS.
Create a sample sequential data set on z/OS
- Prepare the JCL
Copy the following JCL and save it to a local file, for example, `create_sample_seqfile.jcl`. You can place this anywhere on your local machine; Grace itself won't directly use this file in the workflow, but you'll submit it using Zowe CLI.
⚠️ Ensure you replace `YOURHLQ` in the `SYSUT2 DD DSN` statement with your actual TSO user ID or another High-Level Qualifier you are authorized to use.
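Here is a minimal IEBGENER sketch along those lines (the sample records, space, and DCB attributes are illustrative, not prescribed by Grace):

```jcl
//CRSEQJOB JOB (ACCT),'CREATE SEQ FILE',CLASS=A,MSGCLASS=X,
//             NOTIFY=&SYSUID
//* Copy instream sample records to a new cataloged sequential dataset
//STEP01   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD *
HELLO FROM THE MAINFRAME - RECORD 01
THIS DATA WILL TRAVEL TO AWS S3 - RECORD 02
GRACE ORCHESTRATED THIS TRANSFER - RECORD 03
/*
//SYSUT2   DD DSN=YOURHLQ.SAMPLE.TEXTFILE,
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,
//            SPACE=(TRK,(1,1)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)
```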
Upload and submit the JCL using Zowe CLI
- First, upload the JCL to a PDS on the mainframe (e.g. your personal JCL library). Replace `YOUR.JCL.PDS(CRSEQJCL)` with your actual PDS and desired member name.
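For example, a sketch using Zowe CLI (this assumes your default `zosmf` profile; adjust the local path and PDS member to yours):

```shell
zowe zos-files upload file-to-data-set create_sample_seqfile.jcl "YOUR.JCL.PDS(CRSEQJCL)"
```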
- Then, submit the JCL:
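A sketch of the submit step (again assuming your default `zosmf` profile):

```shell
zowe zos-jobs submit data-set "YOUR.JCL.PDS(CRSEQJCL)"
```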
- Verify that the job completes successfully (usually `MAXCC 0000`). If it fails, check the job output for errors (e.g. authority issues, invalid HLQ).
You can view the created sequential dataset with:
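For example (a sketch; on older Zowe CLI versions, `zowe zos-files download data-set` achieves the same result):

```shell
zowe zos-files view data-set "YOURHLQ.SAMPLE.TEXTFILE"
```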
Ensure the dataset identifier matches that of your JCL.
- Note your dataset name
Once the job is successful, the dataset `YOURHLQ.SAMPLE.TEXTFILE` (with your actual HLQ) will exist on your z/OS system. Make a note of this full DSN, as you'll need it for your `grace.yml` file.
Create an S3 bucket
You'll need an S3 bucket to store the data extracted from the mainframe.
- Choose a globally unique bucket name and your desired AWS region.
- Open your terminal and use the AWS CLI:
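For example (a sketch; substitute your own bucket name and region):

```shell
aws s3 mb s3://grace-demo-bucket --region us-east-1
```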
Make note of your exact bucket name, as you'll need it for `grace.yml`.
⚠️ S3 bucket names are globally unique, so plain `grace-demo-bucket` is likely taken; your actual name will probably need to be something like `grace-demo-bucket-[your_name]-[some_numbers]`.
Initialize your Grace project
Now, let's create a new Grace project directory.
- In your terminal, navigate to where you want to create your project and run:
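For example (prompts may vary slightly by Grace version):

```shell
grace init
```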
- Follow the interactive prompts:
  - Workflow name: You can accept the default (`my-seq-to-s3-flow`) or change it.
  - HLQ: Enter your typical High-Level Qualifier (e.g. your TSO ID). This will be used for `datasets.jcl`, `datasets.src`, etc. in `grace.yml`.
  - Profile: Enter the name of your working Zowe CLI profile.
- Enter your new project directory:
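Assuming you kept the default workflow name:

```shell
cd my-seq-to-s3-flow
```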
Configuring grace.yml
Open the grace.yml file that grace init created in your my-seq-to-s3-flow directory. We'll modify the config and datasets sections, and then define our single shell job.
A basic grace.yml will look something like this after grace init:
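For illustration, a sketch of the generated file (the exact DSN qualifiers depend on the HLQ and workflow name you entered, per the sanitization rules noted below):

```yaml
config:
  profile: my-zosmf-profile

datasets:
  jcl: YOURHLQ.MYSEQTOS.JCL
  src: YOURHLQ.MYSEQTOS.SRC
  loadlib: YOURHLQ.MYSEQTOS.LOADLIB

jobs: []  # defined in the next section
```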
Verify/update the config block
- `profile`
The `grace init` command should have populated this with the Zowe profile you entered. Ensure it's correct.
- `defaults` and `concurrency`
For this specific tutorial, the `defaults` (for compiler/linker) and `concurrency` settings are not used, because we are only defining a single `shell` job that doesn't compile or link code, nor does it contend for mainframe job slots managed by Grace's concurrency setting. `grace init` does not generate these fields by default, so no further work is needed here.
Verify/update the datasets block
grace init should have populated the jcl, src, and loadlib fields with the HLQ you provided.
Ensure these DSNs look reasonable for your environment. Since we are not using grace deck to upload JCL or source for this tutorial (it's a shell job), these specific datasets are not critical to the workflow's function but are good to have correctly defined as a best practice.
`grace init` sanitizes the workflow name part of the DSN to conform to PDS member/qualifier rules (e.g. uppercasing, truncation, replacing hyphens).
Save your grace.yml file after ensuring everything looks good. In the next step, we'll write the jobs block.
Defining the shell job
Now, we will define the single job that orchestrates the data delivery. Add the following `jobs` block to your `grace.yml` file under the `datasets` block.
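A sketch of such a `jobs` block, assembled from the elements this tutorial discusses (the bucket name and DSN are placeholders you'll replace):

```yaml
jobs:
  - name: SEQTOS3
    type: shell
    shell: bash
    inputs:
      - name: INFILE
        path: "zos://YOURHLQ.SAMPLE.TEXTFILE"
        encoding: text
    with:
      inline: |
        S3_BUCKET="your-s3-bucket-name"
        S3_KEY="seq_extract_$(date +%Y%m%d_%H%M%S).txt"

        echo "Uploading $GRACE_INPUT_INFILE to s3://$S3_BUCKET/$S3_KEY ..."
        aws s3 cp "$GRACE_INPUT_INFILE" "s3://$S3_BUCKET/$S3_KEY" --metadata Content-Type=text/plain

        if [ $? -eq 0 ]; then
          echo "Upload to S3 succeeded."
        else
          echo "Upload to S3 failed." >&2
          exit 1
        fi
```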
⚠️ Change `S3_BUCKET="your-s3-bucket-name"` to the name of your S3 bucket.
⚠️ Change `path: "zos://YOURHLQ.SAMPLE.TEXTFILE"` to the DSN of your sequential dataset on the mainframe.
Key elements in this job:
- `name: SEQTOS3`: A clear name for this job.
- `type: shell`: Specifies that this job will run local shell commands.
- `shell: bash`: Specifies that this job should use `bash` as its shell interpreter.
- `inputs:`
  - `name: INFILE`: This logical name will be exposed as `$GRACE_INPUT_INFILE` inside the shell script.
  - `path: "zos://YOURHLQ.SAMPLE.TEXTFILE"`: The `zos://` prefix tells Grace to wire this dataset to the shell job from the mainframe at runtime.
  - `encoding: text`: This is critical. It instructs Grace to download the EBCDIC file from the mainframe and convert it to the host's default text encoding (typically UTF-8 or another ASCII-compatible encoding), making it readable by standard local tools like `aws s3 cp`.
- `with:`
  - `inline: |`: We use an inline script for simplicity in this tutorial. The `|` operator enables multiline script definition.
  - `S3_BUCKET="your-s3-bucket-name"`: Defines the target S3 bucket for the dataset extracted from the mainframe.
  - `S3_KEY`: Defines the object key (filename and path within the bucket) for the uploaded file in S3. Using a timestamp makes each upload unique.
  - `aws s3 cp "$GRACE_INPUT_INFILE" "s3://$S3_BUCKET/$S3_KEY" --metadata Content-Type=text/plain`: The core AWS CLI command.
    - It uses the `$GRACE_INPUT_INFILE` environment variable exposed by Grace, resolving to the input dataset we defined.
    - `--metadata Content-Type=text/plain` tells S3 to treat the uploaded object as a plain text file.
  - `if [ $? -eq 0 ] ... else ... exit 1; fi`: Basic error checking for the AWS CLI command. Crucially, `exit 1` on failure tells Grace that this shell job step has failed.
For details on virtual path prefixes like `zos://` and how Grace exposes environment variables for job `inputs` and `outputs`, see Virtual Paths & Job I/O.
To learn more about how to work with the `shell` job module, see Jobs - Shell.
After adding this jobs block, your complete grace.yml should look something like this:
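For reference, a sketch of the assembled file (profile name, DSNs, and bucket name are placeholders):

```yaml
config:
  profile: my-zosmf-profile

datasets:
  jcl: YOURHLQ.MYSEQTOS.JCL
  src: YOURHLQ.MYSEQTOS.SRC
  loadlib: YOURHLQ.MYSEQTOS.LOADLIB

jobs:
  - name: SEQTOS3
    type: shell
    shell: bash
    inputs:
      - name: INFILE
        path: "zos://YOURHLQ.SAMPLE.TEXTFILE"
        encoding: text
    with:
      inline: |
        S3_BUCKET="your-s3-bucket-name"
        S3_KEY="seq_extract_$(date +%Y%m%d_%H%M%S).txt"

        echo "Uploading $GRACE_INPUT_INFILE to s3://$S3_BUCKET/$S3_KEY ..."
        aws s3 cp "$GRACE_INPUT_INFILE" "s3://$S3_BUCKET/$S3_KEY" --metadata Content-Type=text/plain

        if [ $? -eq 0 ]; then
          echo "Upload to S3 succeeded."
        else
          echo "Upload to S3 failed." >&2
          exit 1
        fi
```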
Save your grace.yml file. Your workflow is now fully defined!
Running the Grace workflow
With your grace.yml file configured, you're now ready to execute the workflow. For this particular workflow, since we are not generating any JCL specific to Grace's z/OS job modules (like compile or linkedit) and are not uploading local source files via src:// prefixes for z/OS jobs, the grace deck command is not necessary. Grace will wire the z/OS dataset at runtime.
- Ensure you are in your project directory (e.g. `my-seq-to-s3-flow`) in your terminal session.
- Execute the workflow:
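For example:

```shell
grace run
```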
You can also add the `-v` or `--verbose` flag for more detailed output, which is helpful for troubleshooting:
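For example:

```shell
grace run --verbose
```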
As Grace runs the SEQTOS3 job, note key steps in the output:
- Grace launching the `SEQTOS3` job (🚀).
- The final success messages (✅) from Grace.
If the `aws s3 cp` command fails (e.g. due to an incorrect bucket name, permissions, or AWS CLI configuration), your script's `exit 1` will cause Grace to mark the job as FAILED, and you'll see error messages.
Verifying the outcome

Once grace run completes successfully:
- Check AWS S3:
Navigate to your S3 bucket in the AWS console or use the AWS CLI.
You should see your uploaded file, e.g. `seq_extract_YYYYMMDD_HHMMSS.txt`.
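A sketch of the listing command (substitute your bucket name):

```shell
aws s3 ls s3://your-s3-bucket-name/ --recursive
```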
- Inspect the uploaded file:
Verify that the content matches the sample data you created on the mainframe and that it's readable (i.e. correctly converted from EBCDIC to your local system's text encoding).
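You can download and view the object with, for example (substitute your bucket name and the actual object key from the listing):

```shell
aws s3 cp s3://your-s3-bucket-name/seq_extract_YYYYMMDD_HHMMSS.txt ./downloaded.txt
cat downloaded.txt
```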
- Review Grace logs (optional):
  - Navigate to the `.grace/logs/` directory in your project.
  - Open the latest run-specific subdirectory (e.g. `YYYYMMDDTHHMMSS_run_<workflow_uuid>`).
  - You'll find:
    - `SEQTOS3_shell-xxxxxx.json`: The detailed JSON log for your shell job, containing the `stdout` and `stderr` from your inline script.
    - `summary.json`: The overall workflow summary.
Conclusion
Congratulations! You've successfully used Grace to automate the migration of a sequential text dataset from a z/OS mainframe to an AWS S3 bucket.
In this tutorial, you:
- Defined a `shell` job in `grace.yml`.
- Used `inputs` with a `zos://` path and `encoding: text` to seamlessly wire mainframe data to a local script.
- Leveraged environment variables (`$GRACE_INPUT_*`) within a shell script to work with data managed by Grace.
- Automated operations spanning the mainframe and the AWS cloud from a centralized control plane.
This simple example demonstrates a powerful pattern: using Grace to bridge mainframe systems with modern cloud services and local tooling.
Next steps to explore
- Modify the `shell` job to perform transformations on the data locally before uploading to S3.
- Create a workflow that first uses a z/OS job (e.g. running a COBOL program or DFSORT) to prepare or transform data on the mainframe into a `zos-temp://` dataset, and then have a `shell` job download and upload that result.
- Explore other virtual path prefixes like `src://`, `file://`, `zos-temp://`, and `local-temp://` to learn about working with data across platforms and jobs.
- Read the YAML Specification to discover more Grace features.