How to access S3 files from Lambda (Node.js)

S3 is simply file storage. You can put files into S3 in a folder-like structure, similar to a Windows Explorer path such as C:\a\b\c.

You create a bucket in S3, which you can imagine as something like the C drive.

Click on “Create Bucket.”

Here, we have created a bucket called “sde”.

Open the sde bucket; you can then create folders inside it.

Here we have created a folder called “a”.

Create further nested folders so the path becomes a/b/c.

Now we have created a folder structure called a/b/c in a bucket called sde.

Let’s place a .json file containing a JSON object under it.
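For example, sample.json might contain a simple object like the following (hypothetical content; any valid JSON will do):

{"name": "takahashi", "age": 50}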

Click “Upload.”

Drag and drop a local file into this window.

After adding the file, the screen looks like the above; click “Next.”

Leave the default settings and click “Next.”

Click “Next.”

Click “Upload.”

sample.json has now been uploaded under the a/b/c folder of the bucket named sde.

The following is an example of accessing S3 from Lambda. The listObjectsV2 method is used to retrieve a list of objects.

const AWS = require('aws-sdk');
const s3 = new AWS.S3({'region': 'ap-northeast-1'});

exports.handler = (event, context, callback) => {
  const params = {
    'Bucket': 'sde',
    'Prefix': 'a/b/c'
  };
  s3.listObjectsV2(params, function(err, data) {
    if (err) {
      return callback(err); // fail the invocation on an S3 error
    }
    data.Contents.forEach(function(elem) {
      console.log(elem.Key); // log each object key
    });
    callback(null, data.Contents);
  });
};

I expected to get only the JSON files under a/b/c/ in the sde bucket, but the result contains the folder itself as well as the files inside it.

[
  {
    "Key": "a/b/c/",
    "LastModified": "2017-12-03T00:06:15.000Z",
    "ETag": "\"d41d8cd98f00b204e9432298ecf8427e\"",
    "Size": 0,
    "StorageClass": "STANDARD"
  },
  {
    "Key": "a/b/c/sample.json",
    "LastModified": "2017-12-03T00:19:39.000Z",
    "ETag": "\"593902c4008cdb4c567342badee01680\"",
    "Size": 41,
    "StorageClass": "STANDARD"
  }
]

If you want to get only files under a folder, specify StartAfter.

If you do not want the folder itself to appear in Contents.Key, specify StartAfter; S3 then returns the listing starting after the specified key.

const AWS = require('aws-sdk');
const S3 = new AWS.S3({'region': 'us-east-1'});

exports.handler = (event, context, callback) => {
  const params = {
    'Bucket': 'sde',
    'Prefix': 'a/b/c/',
    'StartAfter': 'a/b/c/' // list only the keys that come after a/b/c/ itself
  };
  S3.listObjectsV2(params, function(err, data) {
    if (err) {
      return callback(err); // fail the invocation on an S3 error
    }
    data.Contents.forEach(function(elem) {
      console.log(elem.Key);
    });
    callback(null, data.Contents);
  });
};

The result is as follows: only the file’s full key is returned. (This approach still has problems, as explained below.)

[
  {
    "Key": "a/b/c/sample.json",
    "LastModified": "2017-12-03T00:19:39.000Z",
    "ETag": "\"593902c4008cdb4c200892badee01680\"",
    "Size": 41,
    "StorageClass": "STANDARD"
  }
]

Do not create folders from the management console (conclusion)

Even if StartAfter is specified, any folder that exists below it, such as a/b/c/d, will still be returned. This is because a folder created in the console is a 0-byte object.

This can be avoided by uploading files with a program or the CLI (aws s3 cp, etc.), since these do not create 0-byte folder objects in the first place.

If you have already created a folder from the management console, just delete it with the rm command, since a folder is itself an object.
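For example, deleting only the folder object from the walkthrough above would look like this:

aws s3 rm s3://sde/a/b/c/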

The following is an example command that deletes all folder objects in a bucket:

aws s3 rm s3://bucket_name/ --exclude '*.*' --recursive

If you are worried about deleting the wrong objects, first add the --dryrun option to see which objects would be affected, and then run the command for real.
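For example, the following only displays the operations that would be performed, without deleting anything:

aws s3 rm s3://bucket_name/ --exclude '*.*' --recursive --dryrun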

Maximum number of objects that can be retrieved with listObjectsV2 is 1000

The maximum number of objects that can be retrieved in a single listObjectsV2 call is 1,000. If there are more than 1,000 objects, IsTruncated is true and a continuation token is set in NextContinuationToken.

To retrieve the next batch of up to 1,000 objects, pass ContinuationToken: token as an argument to listObjectsV2. When the last batch has been retrieved, IsTruncated is false.

Since 1,000 is the default (and maximum) value, you can lower the number of items retrieved at one time by specifying, for example, MaxKeys: 100 as an argument to listObjectsV2. However, even if you specify a value of 1,001 or more, at most 1,000 items are returned per call.
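The following is a minimal sketch of paging through all objects (it reuses the sde bucket and a/b/c/ prefix from above, and assumes a Node.js runtime that supports async/await):

const AWS = require('aws-sdk');
const s3 = new AWS.S3({'region': 'ap-northeast-1'});

exports.handler = async (event) => {
  const keys = [];
  let token; // undefined on the first call
  do {
    const data = await s3.listObjectsV2({
      Bucket: 'sde',
      Prefix: 'a/b/c/',
      StartAfter: 'a/b/c/',    // skip the folder object itself, as above
      ContinuationToken: token // undefined values are ignored on the first call
    }).promise();
    data.Contents.forEach(elem => keys.push(elem.Key));
    token = data.NextContinuationToken; // set only while IsTruncated is true
  } while (token);
  return keys;
};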

Uploading files with AWS-SDK’s s3.putObject method (Node.js v12.x)

S3 has a putObject method, which can be used to upload a file under a specified object key in an S3 bucket.

const AWS = require('aws-sdk');
const s3 = new AWS.S3({'region': 'ap-southeast-1'});
AWS.config.update({region: 'ap-southeast-1'});

exports.handler = async (event) => {
  const params = {
    Bucket: 'test-bucket',
    Key: 'dir1/subdir1/sample.json',
    Body: JSON.stringify({name: 'takahashi', age: 50}, undefined, 1), // pretty-printed JSON string
    ContentType: 'application/json'
  };
  await s3.putObject(params).promise().catch(err => {
    throw new Error(`putObject failed: ${err.message}`); // keep the original error message
  });
};

This time, since it is a JSON file, ContentType: 'application/json' is used.

Specify the bucket name in Bucket and the object key name in Key.

The content of sample.json is specified in Body. For a JSON file, it must be a string.

Therefore, JSON.stringify is used, and to pretty-print the output, the second and third arguments are specified as follows.

Argument         Value
First argument   the object
Second argument  undefined
Third argument   1
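For example, JSON.stringify({name: 'takahashi', age: 50}, undefined, 1) produces the following string (the line breaks and one-space indentation come from the third argument):

{
 "name": "takahashi",
 "age": 50
}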

You can now confirm that the file uploaded to S3 is stored in a formatted, readable state.

This is only a matter of appearance; when you retrieve the file with S3’s getObject, you still need to parse it with JSON.parse(file.Body).
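A minimal sketch of reading the file back (the bucket and key reuse the putObject example above):

const AWS = require('aws-sdk');
const s3 = new AWS.S3({'region': 'ap-southeast-1'});

exports.handler = async (event) => {
  const file = await s3.getObject({
    Bucket: 'test-bucket',
    Key: 'dir1/subdir1/sample.json'
  }).promise();
  // file.Body is a Buffer, so convert it to a string before parsing
  const obj = JSON.parse(file.Body.toString('utf-8'));
  return obj;
};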

Putting a 0-byte file

If you want to create a 0-byte file with putObject, set the value of the Body property to null. (Writing an empty JSON object "{}" instead would result in a 2-byte file.)

const AWS = require('aws-sdk');
const S3 = new AWS.S3({'region':'us-east-1'});

exports.handler = async (event) => {
  const param = {
    'Bucket': 'bucketname',
    'Key': 'var/tmp/a.json',
    'Body': null // Leave as null
  }
  await S3.putObject(param).promise()
}

Deleting a nonexistent file with AWS-SDK’s s3.deleteObject method does not result in an error (Node.js v12.x)

With getObject and similar methods, an error occurs if the file does not exist; however, deleting a nonexistent file with the deleteObject method does not cause an error.

The following is an execution example; the .catch after the await is never entered.

const AWS = require('aws-sdk');
const s3 = new AWS.S3({'region': 'ap-southeast-1'});
AWS.config.update({region: 'ap-southeast-1'});

exports.handler = async (event) => {

  const deletes = await s3.deleteObject(
    {
      Bucket: 'test-bucket',  // existing bucket
      Key: 'test/sample.json' // nonexistent file
    }).promise().catch(err => {
      throw new Error(err); // not reached even though test/sample.json does not exist
    });
};

With the deleteObject method, if a nonexistent bucket is specified, the catch clause is entered.

const AWS = require('aws-sdk');
const s3 = new AWS.S3({'region': 'ap-southeast-1'});
AWS.config.update({region: 'ap-southeast-1'});

exports.handler = async (event) => {

  const deletes = await s3.deleteObject(
    {
      Bucket: 'test-bucketxxx', // nonexistent bucket
      Key: 'test/sample.json'
    }).promise().catch(err => {
      throw new Error(err); // the catch clause is entered
    });
};
