# S3 is NOT a DB

A simple interface for using Amazon S3 as a database. :warning: Work In Progress :warning:

## Usage
```js
// Model: Foo (Prefix root)
const modelFoo = new Model('foo');
modelFoo.parent = null;
modelFoo.fields = ['foo1', 'foo2', 'foo3'];

// Model: FooBar (relationship)
const modelBar = new Model('bar');
modelBar.parent = modelFoo;
modelBar.fields = ['bar1', 'bar2', 'bar3'];

class Storage extends Bucket {
  models = [modelFoo, modelBar];
}

// ..

const storage = new Storage();
const client = storage.config({
  bucket: 's3-is-not-a-db',
  region: 'us-east-1'
});
// Prefix: <Bucket>/foo/00112233-4455-6677-8899-aabbccddeeff
const dataFoo = await client.Foo.fetch('00112233-4455-6677-8899-aabbccddeeff');
await client.Foo.write('00112233-4455-6677-8899-aabbccddeeff', {...dataFoo, foo1: 'newValue'});

// Prefix: <Bucket>/foo/bar/00112233-4455-6677-8899-aabbccddeeff
const dataBar = await client.FooBar.fetch('00112233-4455-6677-8899-aabbccddeeff');
await client.FooBar.write('00112233-4455-6677-8899-aabbccddeeff', {...dataBar, bar2: 'newValue'});
```
## Model properties

In most cases you just need to instantiate your model using:

```js
const model = new Model('<Name>'); // Maps R/W operations to Prefix: <Bucket>/<Name>
```

In more complex cases you can extend the model with the following optional properties:
| Property | Description |
|----------|-------------|
| `name`   | The model name (alternative to using `new Model('<Name>')`) |
| `parent` | References the associated parent `Model` (nested relationship) |
| `fields` | Defines the supported root-level Object keys in R/W operations |
| `type`   | Supported types (`base64`, `binary`, `json`, `text`) |
## Image example
```js
import fs from 'node:fs';

// Construct the model.
const modelImage = new Model('image');
modelImage.type = 'binary';

// ..

// Read the image into a TypedArray.
const data = Uint8Array.from(
  Buffer.from(
    fs.readFileSync('path/to/example.jpg')
  )
);

await client.Image.write('example.jpg', data);

const buffer = await client.Image.fetch('example.jpg');

// Convert the result to a Base64 URL.
buffer.toString('base64url');
```
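As a quick sanity check, the fetched value can be round-tripped back to disk. A short sketch, assuming `fetch()` resolves to a Node.js Buffer as the `toString('base64url')` call above implies (the output path is hypothetical):

```js
import fs from 'node:fs';

// Round-trip check: write the fetched binary back out and report its size.
const copy = await client.Image.fetch('example.jpg');

fs.writeFileSync('path/to/example-copy.jpg', copy);
console.log('bytes written:', copy.length);
```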
## For the impatient (TO DO)
While batch processing works in a perfect environment, a deadlock can occur any time an exception is thrown between client operations. For example, a network failure may result in an incomplete operation, leaving behind an `<Object>.lock` that blocks write operations from new client instances (e.g. in a threaded application).

I'm currently evaluating several solutions for handling this (the "Work In Progress"). If you don't care about write integrity in a multi-user or threaded environment, all other client methods work as expected.
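For context, the "Record locking (Pessimistic control)" approach referenced below can be sketched with plain AWS SDK v3 calls. This is only an illustration of the `<Object>.lock` idea, not how this package implements or will implement it; the bucket, keys, and helper names are hypothetical:

```js
import {
  S3Client,
  HeadObjectCommand,
  PutObjectCommand,
  DeleteObjectCommand
} from '@aws-sdk/client-s3';

const s3 = new S3Client({region: 'us-east-1'});
const Bucket = 's3-is-not-a-db';

// Returns true if a lock object already exists for the given key.
async function isLocked(lockKey) {
  try {
    await s3.send(new HeadObjectCommand({Bucket, Key: lockKey}));
    return true;
  } catch (err) {
    if (err.name === 'NotFound') return false;
    throw err;
  }
}

// Naive pessimistic lock: create <Object>.lock, run the update, then remove it.
// A crash between these steps still leaves a stale lock (the deadlock described
// above), and the check-then-put is not atomic without conditional writes.
async function withLock(key, update) {
  const lockKey = `${key}.lock`;

  if (await isLocked(lockKey)) {
    throw new Error(`Write blocked by existing lock: ${lockKey}`);
  }

  await s3.send(new PutObjectCommand({Bucket, Key: lockKey, Body: ''}));

  try {
    await update();
  } finally {
    // Release the lock even when update() throws.
    await s3.send(new DeleteObjectCommand({Bucket, Key: lockKey}));
  }
}
```

A call like `await withLock('foo/<id>', () => client.Foo.write(...))` would then serialize writers, at the cost of the failure modes noted in the comments.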
## Developers

### CLI options

Run ESLint on project sources:

```sh
$ npm run lint
```

Run Mocha unit tests:

```sh
$ npm run test
```

Run the example as a single process:

```sh
$ npm run example-single
```

Run the example concurrently:

```sh
$ npm run example-parallel
```
## References
- Amazon Simple Storage Service quotas
- Organizing objects using prefixes
- Record locking (Pessimistic control)
## Versioning
This package is maintained under the Semantic Versioning guidelines.
## License and Warranty
This package is distributed in the hope that it will be useful, but without any warranty; without even the implied warranty of merchantability or fitness for a particular purpose.
s3-is-not-a-db is provided under the terms of the MIT license.
Amazon S3 is a registered trademark of Amazon Web Services, Inc.