Iterating

By including a .csv or .xlsx file among Ply valuesFiles, you can execute your tests repeatedly, once for each row in a dataset. You can take advantage of this feature to perform load testing on your APIs, or simply to run with varying input permutations.
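
If you'd rather not pass the dataset on the command line every time, it can be listed in plyconfig.yaml alongside your other values files. Here's a minimal sketch, assuming valuesFiles accepts a simple list of paths in plyconfig.yaml (mirroring the comma-separated --valuesFiles flag used in the command-line examples later in this topic); the paths are from ply-demo:

# plyconfig.yaml (sketch only; the exact valuesFiles shape may differ in your Ply version)
valuesFiles:
  - test/values/localhost.json
  - test/values/forties-movies.csv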

Syntax

The first row in a .csv/.xlsx values file designates value names and object structure. Take this example, which includes the first few rows from ply-demo’s forties-movies.csv file:

title,         year, rating, poster,  credits[0].name, credits[0].role, credits[1].name, credits[1].role, credits[2].name, credits[2].role, credits[3].name, credits[3].role, credits[4].name,  credits[4].role, credits[5].name,  credits[5].role, webRef.site, webRef.ref, description
The Ape,       1940, 3,      ta.jpg,  William Nigh,    director,        Boris Karloff,   actor,           Maris Wrixon,    actor,           Gene O'Donnel,   actor,           Dorothy Vaughan,  actor,           Gertrude Hoffman, actor,           imdb.com,    tt0032215,  Karloff's nuanced performance as the well-meaning but demented scientist is the one redeeming feature in this cheaply made horror melodrama which provides very little in the way of horror or drama.
Before I Hang, 1940, 3.5,    bih.jpg, Nick Grinde,     director,        Boris Karloff,   actor,           Evelyn Keyes,    actor,           Bruce Bennet,    actor,           Edward Van Sloan, actor,           Ben Taggart,      actor,           imdb.com,    tt0032245,  Karloff gives great face as the kindly doctor who inadvertantly turns himself into a maniacal killer in this rather stuffy melodramatic thriller from Columbia Pictures.
Black Friday,  1940, 3,      bf.jpg,  Arthur Lubin,    director,        Boris Karloff,   actor,           Bela Lugosi,     actor,           Stanley Ridges,  actor,           Ann Nagel,        actor,           Anne Gwynne,      actor,           imdb.com,    tt0032258,  Karloff and Lugosi team up in what amounts to more of a crime drama with supernatural elements than a straight horror film.

The first data row here translates into the following values object structure:

{
  "title": "The Ape",
  "year": 1940,
  "rating": 3,
  "poster": "ta.jpg",
  "credits": [
    {
      "name": "William Nigh",
      "role": "director"
    },
    {
      "name": "Boris Karloff",
      "role": "actor"
    },
    {
      "name": "Maris Wrixon",
      "role": "actor"
    },
    {
      "name": "Gene O'Donnel",
      "role": "actor"
    },
    {
      "name": "Dorothy Vaughan",
      "role": "actor"
    },
    {
      "name": "Gertrude Hoffman",
      "role": "actor"
    }
  ],
  "webRef": {
    "site": "imdb.com",
    "ref": "tt0032215"
  },
  "description": "Karloff's nuanced performance as the well-meaning but demented scientist is the one redeeming feature in this cheaply made horror melodrama which provides very little in the way of horror or drama."
}

Then, in the rowwise-requests.ply.yaml request suite, we can reference these values in our createMovie request:

createMovie: # POST
  url: ${baseUrl}/movies
  method: POST
  headers:
    Accept: application/json
    Content-Type: application/json
  body: |-
    {
      "credits": [
        {
          "name": "${credits[0].name}",
          "role": "${credits[0].role}"
        },
        {
          "name": "${credits[1].name}",
          "role": "${credits[1].role}"
        },
        {
          "name": "${credits[2].name}",
          "role": "${credits[2].role}"
        },
        {
          "name": "${credits[3].name}",
          "role": "${credits[3].role}"
        },
        {
          "name": "${credits[4].name}",
          "role": "${credits[4].role}"
        },
        {
          "name": "${credits[5].name}",
          "role": "${credits[5].role}"
        }
      ],
      "poster": "${poster}",
      "rating": ${rating},
      "title": "${title}",
      "webRef": {
        "ref": "${webRef.ref}",
        "site": "${webRef.site}"
      },
      "year": ${year}
    }

Each run (one per row) executes with the corresponding values from the dataset.
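
For instance, a command along these lines runs the rowwise-requests.ply.yaml suite once per row. This is a sketch: the requests/ location of the suite is assumed (matching how ply-demo’s flow is referenced below), and the values files are the same ones used in the later examples:

ply requests/rowwise-requests.ply.yaml --valuesFiles="test/values/localhost.json,test/values/forties-movies.csv"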

Load testing

For very large datasets (on the order of many thousands of rows), such as might be used in load testing, .csv values files are recommended over .xlsx. The reason is that .xlsx files are always loaded into memory, whereas .csv files are streamed and buffered.

Another consideration when running Flows with large datasets: it’s best to do this from the command line rather than through VS Code, which can get bogged down listening for flow run updates.

Two config options can help control how fast requests are submitted against an API.
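
A hedged sketch follows: assuming those options are batchRows (the number of rows submitted per batch) and batchDelay (how many milliseconds to wait between batches), throttling could be configured in plyconfig.yaml like so:

# plyconfig.yaml (option names and values assumed, not confirmed here)
batchRows: 10     # submit 10 rows at a time
batchDelay: 1000  # pause one second between batches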

Another option to consider is submit, which tells Ply to skip verification of results. This can be useful in load testing: Ply doesn’t spend time comparing actual results against expected, so it can achieve greater throughput. On the command line, use --submit. In VS Code, click the ‘Submit’ code lens or the Submit icon.

Example command line for exercising ply-demo’s movies-api.ply.flow with 264 rows of data from forties-movies.xlsx:

ply flows/movies-api.ply.flow --valuesFiles="test/values/localhost.json,test/values/forties-movies.xlsx" --submit

Parallel execution

Suites by default are executed sequentially, with Ply waiting for one suite to complete before running the next. As with non-iterating tests, the parallel config option can be used to execute suites in parallel. With rowwise values, parallel applies to how rows are handled as well (multiple rows are submitted simultaneously). When iterating in parallel, keep in mind that because multiple rows are in flight at once, they won’t necessarily complete in dataset order.

Example command line for exercising ply-demo’s movies-api.ply.flow in parallel with 264 rows of data from forties-movies.csv:

ply flows/movies-api.ply.flow --valuesFiles="test/values/localhost.json,test/values/forties-movies.csv" --submit --parallel

Next Topic: Cases