NoSQL Design for DynamoDB

Differences Between Relational Data Design and NoSQL

Relational database systems (RDBMS) and NoSQL databases have different strengths and weaknesses:

  • In RDBMS, data can be queried flexibly, but queries are relatively expensive and don’t scale well in high-traffic situations (see First Steps for Modeling Relational Data in DynamoDB).

  • In a NoSQL database such as DynamoDB, data can be queried efficiently in a limited number of ways, outside of which queries can be expensive and slow.

These differences make database design very different between the two systems:

  • In RDBMS, you design for flexibility without worrying about implementation details or performance. Query optimization generally doesn’t affect schema design, but normalization is very important.

  • In DynamoDB, you design your schema specifically to make the most common and important queries as fast and as inexpensive as possible. Your data structures are tailored to the specific requirements of your business use cases.

Two Key Concepts for NoSQL Design

NoSQL design requires a different mindset than RDBMS design. For an RDBMS, you can go ahead and create a normalized data model without thinking about access patterns. You can then extend it later when new questions and query requirements arise. You can organize each type of data into its own table.

NoSQL design is different:

  • For DynamoDB, you shouldn’t start designing your schema until you know the questions it will need to answer. Understanding the business problems and the application use cases up front is essential.

  • You should maintain as few tables as possible in a DynamoDB application. Most well-designed applications require only one table (a minimal table-definition sketch follows this list).
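
To make this concrete, here is a minimal sketch of such a single-table definition using boto3, the AWS SDK for Python. The table name (AppTable) and the generic key names (PK and SK) are illustrative assumptions, not a prescribed schema:

    import boto3  # AWS SDK for Python

    dynamodb = boto3.client("dynamodb")

    # One table can hold many entity types; the meaning of the generic
    # keys PK and SK varies per item (for example, CUSTOMER#42 or
    # ORDER#1001). All names here are illustrative.
    dynamodb.create_table(
        TableName="AppTable",  # hypothetical table name
        AttributeDefinitions=[
            {"AttributeName": "PK", "AttributeType": "S"},
            {"AttributeName": "SK", "AttributeType": "S"},
        ],
        KeySchema=[
            {"AttributeName": "PK", "KeyType": "HASH"},   # partition key
            {"AttributeName": "SK", "KeyType": "RANGE"},  # sort key
        ],
        BillingMode="PAY_PER_REQUEST",  # on-demand capacity
    )

Generic key names keep the table open to new item types as access patterns are added, which is a common single-table design choice.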

Approaching NoSQL Design

The first step in designing your DynamoDB application is to identify the specific query patterns that the system must satisfy.

In particular, it is important to understand three fundamental properties of your application’s access patterns before you begin:

  • Data size: Knowing how much data will be stored and requested at one time will help determine the most effective way to partition the data.

  • Data shape: Instead of reshaping data when a query is processed (as an RDBMS does), a NoSQL database organizes data so that its shape in the database corresponds with what will be queried. This is a key factor in increasing speed and scalability.

  • Data velocity: DynamoDB scales by increasing the number of physical partitions that are available to process queries, and by efficiently distributing data across those partitions. Knowing in advance what the peak query loads might be helps determine how to partition data to best use I/O capacity.

After you identify specific query requirements, you can organize data according to general principles that govern performance:

  • Keep related data together.   Research on routing-table optimization 20 years ago found that “locality of reference” (keeping related data together in one place) was the single most important factor in speeding up response time. This is equally true in NoSQL systems today, where keeping related data in close proximity has a major impact on cost and performance. Instead of distributing related data items across multiple tables, you should keep related items in your NoSQL system as close together as possible (see the query sketch after this list).

    As a general rule, you should maintain as few tables as possible in a DynamoDB application. As emphasized earlier, most well-designed applications require only one table, unless there is a specific reason for using multiple tables.

    Exceptions are cases where high-volume time-series data are involved, or datasets that have very different access patterns, but these are exceptions. A single table with inverted indexes can usually enable simple queries to create and retrieve the complex hierarchical data structures required by your application.

  • Use sort order.   Related items can be grouped together and queried efficiently if their key design causes them to sort together. This is an important NoSQL design strategy (the query sketch after this list shows one example).

  • Distribute queries.   It is also important that a high volume of queries not be focused on one part of the database, where they can exceed I/O capacity. Instead, you should design data keys to distribute traffic evenly across partitions as much as possible, avoiding “hot spots” (a write-sharding sketch follows this list).

  • Use global secondary indexes.   By creating specific global secondary indexes, you can support queries that your main table alone cannot, while keeping them fast and relatively inexpensive (a sketch follows this list).
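
To make the first two principles concrete, here is a hedged sketch of reading related items that share a partition key and sort together under a sort-key prefix. The table, key names, and key formats (AppTable, PK, SK, CUSTOMER#42, ORDER#) are illustrative assumptions carried over from the earlier sketch:

    import boto3
    from boto3.dynamodb.conditions import Key

    table = boto3.resource("dynamodb").Table("AppTable")  # illustrative name

    # All of a customer's items share one partition key, so they are
    # stored together; ORDER# items sort contiguously under the sort key
    # and can be retrieved with a single inexpensive Query, not a scan.
    response = table.query(
        KeyConditionExpression=(
            Key("PK").eq("CUSTOMER#42") & Key("SK").begins_with("ORDER#")
        )
    )
    orders = response["Items"]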
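
For distributing queries, one common pattern (a sketch, not the only option) is write sharding: appending a calculated or random suffix to a hot partition key so that writes spread across several partitions. NUM_SHARDS, sharded_pk, and the key formats are hypothetical:

    import random

    import boto3

    table = boto3.resource("dynamodb").Table("AppTable")  # illustrative name
    NUM_SHARDS = 10  # illustrative; size this to your peak write rate

    def sharded_pk(base: str) -> str:
        """Spread writes for a hot key across NUM_SHARDS partition keys."""
        return f"{base}#{random.randint(0, NUM_SHARDS - 1)}"

    # Writes for a popular key now land on one of ten partition keys
    # instead of a single hot one.
    table.put_item(
        Item={"PK": sharded_pk("EVENT#2025-01-01"), "SK": "TS#09:15:00"}
    )

The trade-off is that reads must fan out across all shards and merge the results client-side, so sharding suits write-heavy keys.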
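
And for global secondary indexes, a hedged sketch of querying an index that supports an access pattern the base table’s keys cannot. The index name (GSI1) and its key (GSI1PK) are assumptions; a real index would be defined on the table with its own key schema:

    import boto3
    from boto3.dynamodb.conditions import Key

    table = boto3.resource("dynamodb").Table("AppTable")  # illustrative name

    # Query the index rather than the base table to read items by an
    # alternative key (here, all open orders regardless of customer).
    response = table.query(
        IndexName="GSI1",  # hypothetical index
        KeyConditionExpression=Key("GSI1PK").eq("STATUS#OPEN"),
    )
    open_orders = response["Items"]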

These general principles translate into some common design patterns that you can use to model data efficiently in DynamoDB.

Why Married Women Are Using Two Last Names on Facebook

Scheuble refers to women like Weissman, who change how they introduce themselves depending on context, as “situational last-name users.” Even some famous double-surname users who ostensibly kept their maiden names to maintain professional recognizability after marriage have been known to go by different names in different situations. Sandra Day O’Connor has been known to go by “O’Connor”, her husband’s last name, in colloquial settings, while Hillary Rodham Clinton has generally gone just by “Clinton” in her campaigns for political office.

.. “Two last names without a hyphen is very much a trend right now,” Tate says, adding that starting around 2012, the practice of women replacing their middle names with their maiden names has been “a big, big name-change trend,” particularly in the South. Both formats, she says, are particularly popular among highly educated women with established careers at the time of marriage. She ascribes that popularity to the fact that these formats offer more flexibility than hyphenating. Hyphenated names are “like Krazy Glue,” she says, in that the names have to remain stuck together virtually everywhere: Women who hyphenate have to sign all legal documents with their hyphenated last name, and often feel more obligated to introduce themselves with both last names colloquially—and that, Tate says, can be “a mouthful.”

.. third-grade teacher Alyssa Postman Putzel (née Postman) has adopted what’s sometimes known as a “double-barreled” last name since getting married last summer. Both women appreciate the recognizability it affords them online and in professional contexts.

.. While tech has contributed to the popularity of the unhyphenated double surname in the United States, tech is also in some ways standing in the way of it becoming a commonly accepted naming format. Scheuble says she’s come across legal and administrative forms designed to enter last names into databases that can only register a single or hyphenated surname. “You can put a hyphen, but you cannot put a space,” she says. Scheuble recently tried to help her niece who has an inherited double-barreled last name apply for grad school. “God knows where she ended up” within some schools’ administrative systems, Scheuble says. “It’s ridiculous that computer systems make decisions about people’s lives. But that’s what happens.”

Ways to think about machine learning

I think one could propose a whole list of unhelpful ways of talking about current developments in machine learning. For example:

  • Data is the new oil
  • Google and China (or Facebook, or Amazon, or BAT) have all the data
  • AI will take all the jobs
  • And, of course, saying ‘AI’ itself.

More useful things to talk about, perhaps, might be:

  • Automation
  • Enabling technology layers
  • Relational databases.

.. Before relational databases appeared in the late 1970s, if you wanted your database to show you, say, ‘all customers who bought this product and live in this city’, that would generally need a custom engineering project. Databases were not built with structure such that any arbitrary cross-referenced query was an easy, routine thing to do. If you wanted to ask a question, someone would have to build it. Databases were record-keeping systems; relational databases turned them into business intelligence systems.

This changed what databases could be used for in important ways, and so created new use cases and new billion-dollar companies. Relational databases gave us Oracle, but they also gave us SAP, and SAP and its peers gave us global just-in-time supply chains – they gave us Apple and Starbucks.

.. with each wave of automation, we imagine we’re creating something anthropomorphic or something with general intelligence. In the 1920s and 30s we imagined steel men walking around factories holding hammers, and in the 1950s we imagined humanoid robots walking around the kitchen doing the housework. We didn’t get robot servants – we got washing machines.

.. Washing machines are robots, but they’re not ‘intelligent’. They don’t know what water or clothes are. Moreover, they’re not general purpose even in the narrow domain of washing – you can’t put dishes in a washing machine, nor clothes in a dishwasher.
.. machine learning lets us solve classes of problem that computers could not usefully address before, but each of those problems will require a different implementation, different data, a different route to market, and often a different company.
.. Machine learning is not going to create HAL 9000 (at least, very few people in the field think that it will do so any time soon), but it’s also not useful to call it ‘just statistics’.
.. this might be rather like talking about SQL in 1980 – how do you get from explaining table joins to thinking about Salesforce.com? It’s all very well to say ‘this lets you ask these new kinds of questions’, but it isn’t always very obvious what questions.
  1. .. Machine learning may well deliver better results for questions you’re already asking about data you already have.
  2. .. Machine learning lets you ask new questions of the data you already have. For example, a lawyer doing discovery might search for ‘angry’ emails, or ‘anxious’ or anomalous threads or clusters of documents, as well as doing keyword searches.
  3. .. machine learning opens up new data types to analysis – computers could not really read audio, images or video before, and now, increasingly, that will be possible.

.. Within this, I find imaging much the most exciting. Computers have been able to process text and numbers for as long as we’ve had computers, but images (and video) have been mostly opaque.

.. Now they’ll be able to ‘see’ in the same sense as they can ‘read’. This means that image sensors (and microphones) become a whole new input mechanism – less a ‘camera’ than a new, powerful and flexible sensor that generates a stream of (potentially) machine-readable data.  All sorts of things will turn out to be computer vision problems that don’t look like computer vision problems today.

.. I met a company recently that supplies seats to the car industry, which has put a neural network on a cheap DSP chip with a cheap smartphone image sensor, to detect whether there’s a wrinkle in the fabric (we should expect all sorts of similar uses for machine learning in very small, cheap widgets, doing just one thing, as described here). It’s not useful to describe this as ‘artificial intelligence’: it’s automation of a task that could not previously be automated. A person had to look.

.. one of my colleagues suggested that machine learning will be able to do anything you could train a dog to do.

.. Andrew Ng has suggested that ML will be able to do anything you could do in less than one second.

.. I prefer the metaphor that this gives you infinite interns, or, perhaps, infinite ten-year-olds.

.. Five years ago, if you gave a computer a pile of photos, it couldn’t do much more than sort them by size. A ten-year-old could sort them into men and women, a fifteen-year-old into cool and uncool, and an intern could say ‘this one’s really interesting’. Today, with ML, the computer will match the ten-year-old and perhaps the fifteen-year-old. It might never get to the intern. But what would you do if you had a million fifteen-year-olds to look at your data? What calls would you listen to, what images would you look at, and what file transfers or credit card payments would you inspect?

.. machine learning doesn’t have to match experts or decades of experience or judgement. We’re not automating experts. Rather, we’re asking ‘listen to all the phone calls and find the angry ones’. ‘Read all the emails and find the anxious ones’. ‘Look at a hundred thousand photos and find the cool (or at least weird) people’.

.. this is what automation always does:

  • Excel didn’t give us artificial accountants,
  • Photoshop and InDesign didn’t give us artificial graphic designers, and indeed
  • steam engines didn’t give us artificial horses. ..

Rather, we automated one discrete task, at massive scale.

.. Where this metaphor breaks down (as all metaphors do) is in the sense that in some fields, machine learning can not just find things we can already recognize, but find things that humans can’t recognize, or find levels of pattern, inference or implication that no ten-year-old (or 50-year-old) would recognize.

.. This is best seen in DeepMind’s AlphaGo. AlphaGo doesn’t play Go the way the chess computers played chess – by analysing every possible tree of moves in sequence. Rather, it was given the rules and a board and left to try to work out strategies by itself, playing more games against itself than a human could do in many lifetimes. That is, this is not so much a thousand interns as one intern that’s very, very fast, and you give your intern 10 million images and they come back and say ‘it’s a funny thing, but when I looked at the third million images, this pattern really started coming out’.

.. what fields are narrow enough that we can tell an ML system the rules (or give it a score), but deep enough that looking at all of the data, as no human could ever do, might bring out new results?