Address Search OS OpenNames with PostGIS, SQLAlchemy and Python – PART 2

Part 1 of this post outlined how to configure a PostGIS database to allow us to run Full Text searches against the OS OpenNames dataset.

In Part 2 we look at writing a simple Python 3 CLI app that shows how easy it is to integrate this powerful functionality into your apps and APIs. Other than Python itself, the only dependency we need is the SQLAlchemy ORM, which lets our app communicate with Postgres.


Installing SQLAlchemy

SQLAlchemy can be installed using pip. It depends on psycopg2, which you may struggle to install on a Mac without Postgres present; frustrating, but solutions can be found on Stack Overflow.
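Something like the following should do it (a sketch; on a Mac you may find the psycopg2-binary package installs more easily than psycopg2, as it ships pre-built and doesn't need a local Postgres):

pip install sqlalchemy psycopg2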

A simple address search CLI


import argparse

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.dialects.postgresql import TSVECTOR
from sqlalchemy.orm import sessionmaker

# Create DB Session
engine = create_engine('postgresql://iain:password@localhost:5432/Real-World')
Session = sessionmaker(bind=engine)
session = Session()
Base = declarative_base()


class OpenNames(Base):
    __tablename__ = 'open_names'

    # Map DB columns we're interested in
    ogc_fid = Column(Integer, primary_key=True)
    text = Column(String)
    textsearchable = Column(TSVECTOR)

    def search_address(self, search_for: str):
        print(search_for)
        # Append the OR operator to every word searched
        or_search = search_for.replace(' ', ' | ')
        results = session.query(OpenNames.text).filter(
            OpenNames.textsearchable.match(or_search, postgresql_regconfig='english'))
        for result in results:
            print(result.text)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('address', help='Address you want to search for')
    args = parser.parse_args()

    open_names = OpenNames()
    open_names.search_address(args.address)

Let me draw your attention to…

Hopefully this script is fairly easy to follow, but there are a few things worth drawing your attention to:

  • The TSVECTOR import – note we have to tell SQLAlchemy we're using the Postgres dialect so it understands TSVECTOR
  • The engine, Session and Base lines are simply SQLAlchemy boilerplate that set up our connection and session for the app. You'll need to swap out the connection details for your own
  • In the OpenNames class I've chosen to map only 3 columns; you'll probably want to map more
  • The or_search line is very important: here we append the OR operator to every word the user has supplied, meaning we return addresses that match any of the words. You could extend this to allow the user to specify an exact match and change this to an & search (see the sketch after this list)
  • Finally, note we ask SQLAlchemy to match our search, and importantly we must supply the postgresql_regconfig param to say we're searching in English. This is vital or you won't get the matches you expect
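As a rough sketch of that exact-match idea (reusing the session and OpenNames class above; this is an assumption of how you might extend the script, not part of the original), the only real change is the operator we join the words with:

    # Hypothetical extra method on OpenNames: require every word to match
    def search_address_exact(self, search_for: str):
        and_search = search_for.replace(' ', ' & ')  # join words with the AND operator
        results = session.query(OpenNames.text).filter(
            OpenNames.textsearchable.match(and_search, postgresql_regconfig='english'))
        for result in results:
            print(result.text)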

Running our app

We can run our app from the command line simply by entering the following command

python address_search.py 'forth street'

And we see our app print out all matching addresses that contain either Forth or Street 🙂

Ends

Hopefully you can see how easy it would be to take the above code and integrate it into your apps and APIs.  I hope you’ve found these tutorials useful.  Happy text searching.


Address Search OS OpenNames with PostGIS, SQLAlchemy and Python – PART 1

In this two part post we’ll look at implementing an address search using the Ordnance Survey Open Names dataset.  We’ll use the power of Postgres with the PostGIS extension, leveraging its built-in Full Text Search, and use Python and the SQLAlchemy ORM to create a simple CLI.


Part 1 – Data Load and DB Config

Address Data

The UK is very badly served for free address data.  The best we have is the Ordnance Survey OpenNames dataset.  It will work as a postcode lookup or a street finder (at a push), but the dataset would require a lot of additional processing to be a useful address search.  OS really want you to purchase AddressBase.

That said, OpenNames will suffice for this example and it should be easy to extend the example to a fuller dataset if you’re lucky enough to have one.

Loading Data to PostGIS

You can download OpenNames as either CSV or GML.  I’d recommend GML, as it’s simpler to load into PostGIS using OGR2OGR.

Once you unzip the archive you’ll see that the files are referenced according to the British National Grid, so you can load as much or as little as you want.

We’ll load NS68 which contains addresses in my home town of Stirling, as follows (swap out the values for your db):

ogr2ogr -f PostgreSQL PG:"host=localhost dbname=Real-World port=5432 user=iain password=password" NS68.gml -progress -nln open_names --config PG_USE_COPY YES 

You should now have a new table called open_names containing the addressing info.
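If you want to sanity check the load, a quick count in psql does the trick (just a sketch):

SELECT COUNT(*) FROM open_names;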

Note if you want to load more gml files just use the -append flag:

ogr2ogr -f PostgreSQL PG:"host=localhost dbname=Real-World port=5432 user=iain password=password" NS88.gml -append -progress -nln open_names --config PG_USE_COPY YES 

Setting up Full Text Search

We now have our open_names table, but no text search column.  So we can add a textsearchable column which must be of type TSVECTOR as follows:

ALTER TABLE open_names ADD COLUMN textsearchable TSVECTOR;

We can populate the column using the built-in function TO_TSVECTOR, which tokenises the words based on the supplied config, in our case english (multiple configs are supported).

UPDATE open_names SET textsearchable = TO_TSVECTOR('english', text || ' ' || localid);

If you look at the data in your new column you’ll see that it now contains text tokens representing the address data.
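For example, a quick peek at a few rows (a sketch using the columns we already know about):

SELECT text, textsearchable FROM open_names LIMIT 5;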

Increase accuracy by concatenating multiple columns

Note that we’re concatenating 2 columns together in this update statement – text and localid.  In our case the reason for doing this is that the postcode in the localid column is stored without a space, meaning our search will return a result if the user enters a postcode without a space.

However, it should be clear that if we had better address data we could concatenate multiple columns, meaning that if a user searched for “1 Main St, Stirling, FK3 4GG” we would be able to return an accurate match.
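As a sketch only, an update against a richer (hypothetical) table might look like the following; addresses, street_number, street, town and postcode are made-up names for illustration, not part of OpenNames:

-- hypothetical table and columns, shown purely to illustrate concatenating more fields
UPDATE addresses
SET textsearchable = TO_TSVECTOR('english',
    street_number || ' ' || street || ' ' || town || ' ' || postcode);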

Add an Index for faster searching

Now that we have data set up we can add an index to our new column which will ensure searches are fast:

CREATE INDEX textsearch_idx ON open_names USING GIN (textsearchable);

Let’s do some searches

Now let’s query our new column to see if we can find some matches using the TO_TSQUERY function:

SELECT COUNT(1) FROM open_names WHERE textsearchable @@ TO_TSQUERY('english', 'avenue')

Here we find we have 41 streets in the Stirling area containing the word avenue.  You’ll note that I don’t need to worry about lowercase, uppercase or where the word might appear in the string.  Full text search takes care of that for me 🙂

The @@ operator basically means that the query matches the tsvector column.

Using AND and OR for better matches

A very powerful feature of Postgres’ Full Text Search is the ability to find matches containing all or some of the words in the query, using the AND (&) operator or the OR (|) operator, as these examples show:

select * from open_names where textsearchable @@ to_tsquery('english', 'forth & view');

Here we return only one result, Forth View, which contains both Forth and View.  If we change this to an OR search:

select * from open_names where textsearchable @@ to_tsquery('english', 'forth | view')

We get 7 results, including Forth View, Bruce View and Forth Place.

Again it should be easy to see how powerful text searches could be built for complex text documents.

A final note on Triggers

While our address data is fairly static, if you had a table where users were regularly editing address data, or any other columns you wanted to run a full text search on, you should consider adding a trigger to keep the TSVECTOR column up to date, as outlined here.

So for our example the trigger would look like:

CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE
ON open_names FOR EACH ROW EXECUTE PROCEDURE
tsvector_update_trigger(textsearchable, 'pg_catalog.english', localid, text);

Up Next

Hopefully Part 1 has demonstrated how it is very easy to set up powerful text searching in Postgres.  In Part 2 we’ll look at how we can use Python and SQLAlchemy to allow you to integrate this functionality into your apps and APIs.

 

Avoiding pitfalls running Mongo 3.2 in Docker on OSX

Quick post after smashing my head off the desk for 5 hours trying to understand YTF I couldn’t get Mongo 3.2 running in a Docker container on my Mac.  Hopefully this post spares others the elastoplasts 🙂


Pitfall #1 – Mongo cannot write data to the host OSX machine

There are three factors that cause this issue:

  1. A lot of mongo/docker docs tell you to mount a local volume
  2. By default docker-machine uses the virtualbox driver to create a local VM
  3. Mongo uses mmap to turbo-charge access to files on disk

These facts led me (by way of this issue) to an ancient known issue on VirtualBox that basically means docker ain’t going to be able to read/write files with mmap.

So it’s pointless trying to tell mongo, via docker, to mount a local data volume, as the above bug means mongo isn’t going to be able to access it.  (careful reading of the Docker Hub mongo docs may allow you to divine this, but it’s not at all obvious)

Solution – Create a data volume in the container rather than the host.

Like always once you understand the problem, the solution is simple.  We can tell docker to create a data volume in the container rather than the host, as follows:


docker run --name my-local-mongo -v mongo-data:/data/db -p 27017:27017 -d mongo


In my (still) early days with docker I’d come across these cli commands which contained absolutely no description of WTF all the switches are doing, so a quick breakdown:

  • --name my-local-mongo – This tells docker to give the container the name “my-local-mongo” rather than generating a name like tuftybeaver
  • -v mongo-data:/data/db – This is the key switch, here we tell docker to create a new data volume called mongo-data and mount it as /data/db which is the default location mongo wants to write data to
  • -p 27017:27017 – Here we tell docker to bind the container port 27017 to port 27017 on the host vm, allowing us to access docker locally.
  • -d – This switch tells docker to run the process detached so we’re not running interactively.

If all has gone well docker should be running your mongo container successfully.  A quick docker ps will show your running container.

docker logs -f my-local-mongo will allow you to follow the log files.

Pitfall #2 – I can’t connect to mongo from localhost

This is the classic docker schoolboy error that everyone makes.  The first time you do this you’ll be patting yourself on the back as you type localhost:27017 into your connection string, and then wonder why the hell mongo isn’t found.

This is because everything is actually running on your local docker virtual machine.  To find the IP of the VM enter docker-machine ls

You should see the URL listed, something like tcp://192.168.99.100:2376.  You need to snarf this IP for your connection string.  The easiest way to sanity check that it’s all working is to bash it into your browser, e.g. for this example:

http://192.168.99.100:27017/

The mongo database should helpfully respond with:

It looks like you are trying to access MongoDB over HTTP on the native driver port.

Demonstrating that your mongo db is running and responding to idiots trying to speak to it over HTTP 🙂
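From code it’s the same story; here’s a minimal pymongo sketch (assuming pymongo is installed and your VM has the example IP above):

from pymongo import MongoClient

# Connect via the docker-machine VM's IP (192.168.99.100 in this example), not localhost
client = MongoClient('mongodb://192.168.99.100:27017/')
print(client.server_info()['version'])  # should report 3.2.x if the container is up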

Pitfall #3 – No collections are displayed in RoboMongo

This was a real head scratcher. I was pretty sure everything was working, and yet RoboMongo wasn’t showing a damn thing.  This was a classic case of me, Sherlock Holmes style, spending 2 hours eliminating the impossible to find that whatever remains, however improbable, must be the truth.

So in a turn up for the books, there was no bug in my code!!  It turns out by default Mongo 3.2 uses the WiredTiger storage engine, and RoboMongo can’t parse data stored in this format.

So instead if you’re using Mongo 3.2 you should use MongoChef to connect to your db.  MongoChef will show you all your lovely collections, living within your containerised Mongo.

Hope the above helps, thanks 🙂