Everyone who reads newspapers, magazines or other general-interest media has at least a basic idea of what Machine Learning is. And it is not just a fashion: Machine Learning is already part of our everyday life and will be even more so in the future. From personalized advertisement on the Internet to robot dentists or autonomous cars, Machine Learning can seem like some kind of superpower capable of anything.
But what is Machine Learning, really? It is mainly a set of statistical algorithms that, based on existing data, can derive insights from it. These algorithms fall broadly into two families: supervised and unsupervised learning. In supervised learning, the objective is to perform some kind of prediction, for example whether an e-mail message is spam or not (classification), or how many beers will be sold next week in a supermarket (regression). Unsupervised learning, on the contrary, focuses on answering the question: how are my cases divided into groups? What these algorithms do (each with its own particularities) is bring similar items as close together as possible and keep items that differ from each other as far apart as possible.
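As a tiny illustration of the unsupervised case, here is a minimal, self-contained k-means sketch in plain Python. The data and the one-dimensional setting are invented purely for clarity; real implementations such as scikit-learn's KMeans handle many dimensions and use smarter initialization:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    # Pick k starting centroids at random from the data
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups: values near 1 and values near 10
print(kmeans([1.0, 1.2, 0.8, 9.8, 10.0, 10.2], k=2))
```

The two returned centroids settle near 1.0 and 10.0, the means of the two groups: similar items end up close together, dissimilar items far apart.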
Step 3: K-means in PostgreSQL in a nutshell
Functions written with PL/Python can be called like any other SQL function. As Python has endless libraries for Machine Learning, the integration is very simple. Moreover, apart from giving full support to Python, PL/Python also provides a set of convenience functions to run any parametrized query. So executing Machine Learning algorithms can be a matter of a couple of lines. Let's take a look:
CREATE OR REPLACE FUNCTION kmeans(input_table text, columns text[], clus_num int) RETURNS bytea AS
$$
from pandas import DataFrame
from sklearn.cluster import KMeans
from cPickle import dumps

# Build the column list; an empty array means "all columns"
all_columns = ",".join(columns)
if all_columns == "":
    all_columns = "*"

# plpy.execute runs the query and returns the rows as dictionaries
rv = plpy.execute('SELECT %s FROM %s;' % (all_columns, plpy.quote_ident(input_table)))

# Load the rows into a DataFrame, fit the model and return it pickled
frame = []
for i in rv:
    frame.append(i)
df = DataFrame(frame).convert_objects(convert_numeric=True)
kmeans = KMeans(n_clusters=clus_num, random_state=0).fit(df._get_numeric_data())
return dumps(kmeans)
$$ LANGUAGE plpythonu;
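Once created, the function is called like any other, e.g. SELECT kmeans('cars', ARRAY['wheels','price'], 2); (the table and column names here are invented). The bytea it returns is just the pickled model, which a companion function can later unpickle to assign new rows to clusters. A minimal sketch of that serialize/deserialize round-trip, using Python 3's pickle and a plain dict standing in for the fitted KMeans object:

```python
import pickle

# Stand-in for a fitted model: k-means is fully described by its
# centroids, so a plain dict is enough to illustrate the round-trip.
model = {"n_clusters": 2, "centroids": [[1.0, 2.0], [8.0, 9.0]]}

blob = pickle.dumps(model)     # the bytes the function would store in the bytea
restored = pickle.loads(blob)  # what a later prediction function would read back

print(restored["centroids"])
```

Storing the pickled model in a table keeps training and prediction as two separate SQL calls against the same saved state.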
I think one could propose a whole list of unhelpful ways of talking about current developments in machine learning. For example:
- Data is the new oil
- Google and China (or Facebook, or Amazon, or BAT) have all the data
- AI will take all the jobs
- And, of course, using the term ‘AI’ itself.
More useful things to talk about, perhaps, might be:
- Enabling technology layers
- Relational databases.
.. Before relational databases appeared in the late 1970s, if you wanted your database to show you, say, ‘all customers who bought this product and live in this city’, that would generally need a custom engineering project. Databases were not built with structure such that any arbitrary cross-referenced query was an easy, routine thing to do. If you wanted to ask a question, someone would have to build it. Databases were record-keeping systems; relational databases turned them into business intelligence systems.
This changed what databases could be used for in important ways, and so created new use cases and new billion dollar companies. Relational databases gave us Oracle, but they also gave us SAP, and SAP and its peers gave us global just-in-time supply chains – they gave us Apple and Starbucks.
.. with each wave of automation, we imagine we’re creating something anthropomorphic or something with general intelligence. In the 1920s and 30s we imagined steel men walking around factories holding hammers, and in the 1950s we imagined humanoid robots walking around the kitchen doing the housework. We didn’t get robot servants – we got washing machines... machine learning lets us solve classes of problem that computers could not usefully address before, but each of those problems will require a different implementation, and different data, a different route to market, and often a different company... Machine learning is not going to create HAL 9000 (at least, very few people in the field think that it will do so any time soon), but it’s also not useful to call it ‘just statistics’... this might be rather like talking about SQL in 1980 – how do you get from explaining table joins to thinking about Salesforce.com? It’s all very well to say ‘this lets you ask these new kinds of questions‘, but it isn’t always very obvious what questions.
- .. Machine learning may well deliver better results for questions you’re already asking about data you already have.
- .. Machine learning lets you ask new questions of the data you already have. For example, a lawyer doing discovery might search for ‘angry’ emails, or ‘anxious’ or anomalous threads or clusters of documents, as well as doing keyword searches.
- .. machine learning opens up new data types to analysis – computers could not really read audio, images or video before and now, increasingly, that will be possible.
.. Within this, I find imaging much the most exciting. Computers have been able to process text and numbers for as long as we’ve had computers, but images (and video) have been mostly opaque.
.. Now they’ll be able to ‘see’ in the same sense as they can ‘read’. This means that image sensors (and microphones) become a whole new input mechanism – less a ‘camera’ than a new, powerful and flexible sensor that generates a stream of (potentially) machine-readable data. All sorts of things will turn out to be computer vision problems that don’t look like computer vision problems today.
.. I met a company recently that supplies seats to the car industry, which has put a neural network on a cheap DSP chip with a cheap smartphone image sensor, to detect whether there’s a wrinkle in the fabric (we should expect all sorts of similar uses for machine learning in very small, cheap widgets, doing just one thing, as described here). It’s not useful to describe this as ‘artificial intelligence’: it’s automation of a task that could not previously be automated. A person had to look.
.. one of my colleagues suggested that machine learning will be able to do anything you could train a dog to do
.. Ng has suggested that ML will be able to do anything you could do in less than one second.
.. I prefer the metaphor that this gives you infinite interns, or, perhaps, infinite ten year olds.
.. Five years ago, if you gave a computer a pile of photos, it couldn’t do much more than sort them by size. A ten year old could sort them into men and women, a fifteen year old into cool and uncool and an intern could say ‘this one’s really interesting’. Today, with ML, the computer will match the ten year old and perhaps the fifteen year old. It might never get to the intern. But what would you do if you had a million fifteen year olds to look at your data? What calls would you listen to, what images would you look at, and what file transfers or credit card payments would you inspect?
.. machine learning doesn’t have to match experts or decades of experience or judgement. We’re not automating experts. Rather, we’re asking ‘listen to all the phone calls and find the angry ones’. ‘Read all the emails and find the anxious ones’. ‘Look at a hundred thousand photos and find the cool (or at least weird) people’.
.. this is what automation always does;
- Excel didn’t give us artificial accountants,
- Photoshop and Indesign didn’t give us artificial graphic designers and indeed
- steam engines didn’t give us artificial horses. ..
Rather, we automated one discrete task, at massive scale.
.. Where this metaphor breaks down (as all metaphors do) is in the sense that in some fields, machine learning can not just find things we can already recognize, but find things that humans can’t recognize, or find levels of pattern, inference or implication that no ten year old (or 50 year old) would recognize.
.. This is best seen in DeepMind’s AlphaGo. AlphaGo doesn’t play Go the way the chess computers played chess – by analysing every possible tree of moves in sequence. Rather, it was given the rules and a board and left to work out strategies by itself, playing more games against itself than a human could in many lifetimes. That is, this is not so much a thousand interns as one intern that’s very, very fast: you give your intern 10 million images and they come back and say ‘it’s a funny thing, but when I looked at the third million images, this pattern really started coming out’.
.. what fields are narrow enough that we can tell an ML system the rules (or give it a score), but deep enough that looking at all of the data, as no human could ever do, might bring out new results?
The ‘bad’ industries will be transformed before the good ones. What I mean by that is that computer vision applied to medical imaging would be huge, but the detection/classification isn’t accurate enough for that field just yet. Yes, results are amazing on standard datasets such as ImageNet, but they fail to be equally good when there are orders of magnitude less data. And in that field, accuracy is very important: a net classifying cancer correctly 90% of the time is likely useless.
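To see why, consider the base rate (the numbers below are invented purely for illustration): when the condition is rare, a 90%-accurate classifier drowns the true detections in false alarms.

```python
# Hypothetical screening scenario: 10,000 scans, 1% prevalence,
# and a classifier that is right 90% of the time on both classes.
scans = 10_000
prevalence = 0.01
accuracy = 0.90

true_positives = scans * prevalence * accuracy               # ~90 real cases caught
false_positives = scans * (1 - prevalence) * (1 - accuracy)  # ~990 healthy scans flagged

# Of all flagged scans, what fraction actually has cancer?
precision = true_positives / (true_positives + false_positives)
print(round(precision, 2))  # prints 0.08 – eleven false alarms per true detection
```

So despite the headline 90% accuracy, more than nine out of ten flagged scans are false alarms, which is why clinical use demands far better numbers.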
One exception is automated language translation, which is getting very good. I’m noticing that some of the articles and papers I’m reading are machine translated. They appear to apply machine translation to English articles and then have an editor do manual touch-ups, which is seldom enough.
The “bad” industries such as spam and SEO can definitely benefit from ML as it exists today. There are ML algorithms (e.g. LSTMs) that can generate fake web sites with images that, from Googlebot’s point of view, are completely indistinguishable from real sites. Another use would be generating realistic-looking social media accounts to steer the conversation, perhaps for political purposes. Porn, obviously, could also use ML, given the huge amount of data (the porn itself and user interactions) available.
.. I think it’s pretty safe to say finance will be a big one. Finance has a large number of individuals and firms researching the applications of ML methodologies to financial indicators. With the semi-recent rise of quant firms, I think this research is only going to get more aggressive, and HFT will become more lucrative and more automated as long as regulation does not get in the way.
.. HFT – yes. But longer-term investment (i.e. Buffett – or even with a horizon of a couple of years) is unlikely to be transformed soon – ML needs vast historical data, which is very slow to generate. Waiting 10 years only gives you 10 years of history, which is 5 non-overlapping 2-year forward returns, and maybe 1 or 2 economic/financial regimes.
This is also a problem with new datasets being generated – there is not nearly enough history available to test them or feed them to a ML system.
Furthermore, arguably, longer-term investment requires forward-looking modelling of scenarios, based on the kinds of inputs that were not seen in history. ML is not very applicable when you get big covariate shifts.
So I would say human financial analysts are not going anywhere, and any improvements will be relatively small and incremental... HFT is not profitable. It’s completely commoditized.