For the past few months I’ve been working on migrating an application from MySQL to Neo4j, a graph database.

The new application uses Neo4j as its primary data store. The application accesses the Neo4j database through the Spring Data for Neo4j framework (SDN). SDN is built on top of Spring and Spring Data and adds a higher-level API on top of Neo4j’s Java API. Most importantly, it allows applications to map Neo4j entities to Java types (e.g. User, Job, etc.) rather than Neo4j’s Node or Relationship types, which are more or less just collections of key/value pairs (PropertyContainer).
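
For example, a node entity in SDN looks roughly like this (a minimal sketch; the User class and its fields are hypothetical, not the actual domain model):

    import org.springframework.data.neo4j.annotation.GraphId;
    import org.springframework.data.neo4j.annotation.Indexed;
    import org.springframework.data.neo4j.annotation.NodeEntity;

    // SDN maps this class to a Neo4j node; the annotated fields become node properties.
    @NodeEntity
    public class User {

        @GraphId
        private Long id;

        @Indexed
        private String username;

        private String name;
    }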

As part of the development effort I need to migrate data from MySQL to Neo4j. It’s roughly 500 million rows of data in 4 InnoDB tables in MySQL.

BatchInserter is Neo4j’s solution for bulk loading data into a Neo4j database. It speeds up bulk loads by bypassing transactions, among other things. However, BatchInserter is not designed to work with SDN out of the box.

As mentioned above, among other things, SDN acts as a mapping layer between Java types and Neo4j entities. At the data level it can do this in a few different ways using its TypeRepresentationStrategy implementations. The default implementations are IndexingNodeTypeRepresentationStrategy (for node entities) and IndexingRelationshipTypeRepresentationStrategy (for relationship entities), which add a __type__ property to each entity and create a __types__ node index and a __rel_types__ relationship index. BatchInserter will not create these properties or indexes on its own.

With some help from Michael Hunger, the Neo4j engineer primarily responsible for SDN development, I implemented a simple application that uses BatchInserter to populate a Neo4j database with the __type__ properties set and the __types__ and __rel_types__ indexes created as they should be.

The important bits are the pieces of code that add the __type__ property and manipulate the nodeTypeIndex and relTypeIndex; they create the properties and indexes that the SDN TypeRepresentationStrategy implementations require.

Here’s a simplified example of how I implemented it:

BatchInserter routine:
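
Something along these lines (a simplified, self-contained sketch assuming the Neo4j 1.9-era BatchInserter API; the store path, the com.example.domain.* class names, the WORKS_AT relationship type, and the sample property values are placeholders):

    import org.neo4j.graphdb.DynamicRelationshipType;
    import org.neo4j.helpers.collection.MapUtil;
    import org.neo4j.unsafe.batchinsert.BatchInserter;
    import org.neo4j.unsafe.batchinsert.BatchInserterIndex;
    import org.neo4j.unsafe.batchinsert.BatchInserterIndexProvider;
    import org.neo4j.unsafe.batchinsert.BatchInserters;
    import org.neo4j.unsafe.batchinsert.LuceneBatchInserterIndexProvider;

    public class BatchInsertRoutine {

        public static void main(String[] args) {
            // Open the target store and a Lucene-backed batch index provider.
            BatchInserter inserter = BatchInserters.inserter("/data/neo4j/graph.db");
            BatchInserterIndexProvider indexProvider = new LuceneBatchInserterIndexProvider(inserter);

            // The node and relationship indexes SDN's default TypeRepresentationStrategy expects.
            BatchInserterIndex nodeTypeIndex =
                    indexProvider.nodeIndex("__types__", MapUtil.stringMap("type", "exact"));
            BatchInserterIndex relTypeIndex =
                    indexProvider.relationshipIndex("__rel_types__", MapUtil.stringMap("type", "exact"));

            // Create a node with its domain properties plus the __type__ property
            // SDN uses to map the node back to a Java type, and register it in __types__.
            long user = inserter.createNode(MapUtil.map(
                    "name", "Alice",
                    "__type__", "com.example.domain.User"));
            nodeTypeIndex.add(user, MapUtil.map("className", "com.example.domain.User"));

            long job = inserter.createNode(MapUtil.map(
                    "title", "Engineer",
                    "__type__", "com.example.domain.Job"));
            nodeTypeIndex.add(job, MapUtil.map("className", "com.example.domain.Job"));

            // Relationship entities get the same treatment via __rel_types__.
            long worksAt = inserter.createRelationship(user, job,
                    DynamicRelationshipType.withName("WORKS_AT"),
                    MapUtil.map("__type__", "com.example.domain.WorksAt"));
            relTypeIndex.add(worksAt, MapUtil.map("className", "com.example.domain.WorksAt"));

            // Flush the indexes and shut down in the right order.
            nodeTypeIndex.flush();
            relTypeIndex.flush();
            indexProvider.shutdown();
            inserter.shutdown();
        }
    }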

RelationshipType implementation:
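
In place of DynamicRelationshipType.withName above, the migration code can use its own RelationshipType; a minimal sketch (the class name is hypothetical):

    import org.neo4j.graphdb.RelationshipType;

    // Minimal RelationshipType backed by a plain string, so type names can be built
    // from the migration data or taken from the SDN @RelationshipEntity annotations.
    public class NamedRelationshipType implements RelationshipType {

        private final String name;

        public NamedRelationshipType(String name) {
            this.name = name;
        }

        @Override
        public String name() {
            return name;
        }
    }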

Utility functions:
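
Helpers along these lines keep the SDN bookkeeping (the __type__ property plus the __types__/__rel_types__ index entries) out of the main import loop; the class and method names are placeholders:

    import java.util.HashMap;
    import java.util.Map;

    import org.neo4j.helpers.collection.MapUtil;
    import org.neo4j.unsafe.batchinsert.BatchInserter;
    import org.neo4j.unsafe.batchinsert.BatchInserterIndex;

    public final class SdnBatchUtil {

        private SdnBatchUtil() {
        }

        /** Copies the domain properties and adds the __type__ property SDN expects. */
        public static Map<String, Object> withType(Class<?> entityClass, Map<String, Object> properties) {
            Map<String, Object> result = new HashMap<String, Object>(properties);
            result.put("__type__", entityClass.getName());
            return result;
        }

        /** Creates a node with its __type__ property and registers it in the __types__ index. */
        public static long createTypedNode(BatchInserter inserter, BatchInserterIndex nodeTypeIndex,
                Class<?> entityClass, Map<String, Object> properties) {
            long nodeId = inserter.createNode(withType(entityClass, properties));
            nodeTypeIndex.add(nodeId, MapUtil.map("className", entityClass.getName()));
            return nodeId;
        }
    }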

 


7 Responses to Neo4j BatchInserter and Spring Data for Neo4j

  1. Hi Tero,

    thanks a lot for the article. It would be great if you could share the code as a project, for instance on GitHub (including the MySQL connection to pull the data).

    Something that could reduce the size of your graph is to use @TypeAlias(“xx”) on top of your SDN classes, so that the __type__ property could be reduced to a few characters instead of the FQN.
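
    For example (just a sketch; the User class and the "u" alias are hypothetical):

    import org.springframework.data.annotation.TypeAlias;
    import org.springframework.data.neo4j.annotation.NodeEntity;

    // __type__ is then stored as "u" instead of the fully qualified class name.
    @NodeEntity
    @TypeAlias("u")
    public class User {
    }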

    Could you also post some information about the runtime of your insertion?

    • tpp says:

      Hi Michael,

      Good tip on TypeAlias.

      The example I posted in this article is simplified quite a bit. I just wanted to demonstrate the SDN “integration” without all the gunk I had in the actual migration application.

      The initial data migration finished earlier today, actually. It ended up being 90M nodes and 240M relationships. Database size (incl. Lucene indexes) was 197GB. I still have to do incremental migrations of data that has changed since I started the imports. It’s going to be about 10% of the data, if my estimations are accurate.

      There were three primary issues complicating the migration process:

      1. The MySQL database that was the source of the data is hopelessly over the capacity of the servers it’s running on (which is one of the reasons we’re moving the data out). We’re using master/slave replication with three slaves, one of which is dedicated to my data migration apps. There was one particular table that was performing so poorly that a straightforward data export was going to take about two weeks or more. It was taking 2 to 5 minutes to get 50,000 records at a time from that table. I tried everything I could think of trying to optimize the queries against that table, but nothing helped. So what I did instead was parallel process the data export. I had four threads running simultaneously taking data out of MySQL and dumping it into a separate Neo4j database for each batch of 10,000 primary entities (primary entity + an object graph of dependent data…roughly 50K to 100K actual rows per batch). That created almost 20,000 small Neo4j databases of 5MB to 20MB each. These small Neo4j databases did not have the SDN __type__ properties or Lucene indexes, but they had the proper graph structure, and all the node and relationship properties fully populated. I then ran a script that read the data from the 20,000 databases and combined them into a single database that will eventually become the production database. I saved about 50% to 60% of total runtime doing this.

      2. I have a fair number of unique entities in this database, so the migration scripts needed to do quite a few index lookups to make sure I wasn’t creating duplicate entities. This slowed down the process a lot as well. I also noticed that Lucene lookups would degrade faster as the database size grew. I created a Mongo implementation of the BatchIndex and used that for the uniqueness lookups (a rough sketch of the idea follows after this list). Mongo-based index lookups seemed to have more consistent performance characteristics than Lucene-based index lookups.

      3. I didn’t size the server performing the migrations correctly. I only had 16GB of RAM on it. That was adequate until about halfway through the import. After that I was consuming more than 16GB of memory during the BatchIndex flush and BatchInserter shutdown operations. It slowed things down a LOT. If I were to do this again from scratch, I’d get a server with 64GB of memory and faster disks.
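
      A rough sketch of the Mongo-backed uniqueness lookup idea (not the actual BatchIndex implementation mentioned in point 2; the database, collection, and field names are made up):

      import com.mongodb.BasicDBObject;
      import com.mongodb.DBCollection;
      import com.mongodb.DBObject;
      import com.mongodb.MongoClient;

      // Maps a uniqueness key (e.g. a natural id from MySQL) to the Neo4j node id
      // already created for it, so the importer can skip duplicates.
      public class MongoUniquenessIndex {

          private final DBCollection entries;

          public MongoUniquenessIndex(MongoClient mongo) {
              this.entries = mongo.getDB("migration").getCollection("uniqueEntities");
              // Index the lookup key so get() stays fast as the collection grows.
              this.entries.createIndex(new BasicDBObject("key", 1));
          }

          /** Returns the node id previously stored for this key, or null if unseen. */
          public Long get(String key) {
              DBObject doc = entries.findOne(new BasicDBObject("key", key));
              return doc == null ? null : (Long) doc.get("nodeId");
          }

          /** Remembers the node id created for this key. */
          public void put(String key, long nodeId) {
              entries.insert(new BasicDBObject("key", key).append("nodeId", nodeId));
          }
      }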

      I might post another blog post about all that at a later date.

      • Tero,

        a blog post about the things you learned and perhaps the source code of the Mongo-based index (it would also be interesting to look into Redis or memcached for that) would be really great.

        I’m just thinking about providing a tool like this as part of SDN. It would use the meta-information of the entities (MappingContext) to correctly control the batch inserter; it should then be possible to stream iterators of entities (sub-graphs) to the batch inserter.

        Did you ever look into accessing MySQL not through SQL but through those lower-level KV-based APIs for the export? We had many customers whose MySQL export took several orders of magnitude longer than the import into Neo4j.

        Thanks

        Michael

  2. Sanjay Dalal says:

    Thanks, guys. What should be done for an index with a unique constraint and a spatial index?

    Michael, do you know if there is any plan to support batch functionality in SDN Neo4j?

  3. KendallV says:

    This is awesome – thanks a bunch!

    Initially I had trouble using short class names (@TypeAlias did not behave exactly as I had expected). Using

    String nameOf(Class<?> type) {
        return type.getSimpleName();
    }

    for __type__ and className values solved that. I expect that’s what Michael intended in his comment above. Interestingly, it was not required for __type__ – it just looks better/takes less space.

    Could also get the TypeAlias directly from the class, e.g. with something like the sketch below, but that seems to circumvent the point and add unnecessary complexity to a batch operation.
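
    A hypothetical sketch (assuming org.springframework.data.annotation.TypeAlias is imported):

    // Falls back to the simple class name when no @TypeAlias is present.
    String nameOf(Class<?> type) {
        TypeAlias alias = type.getAnnotation(TypeAlias.class);
        return alias != null ? alias.value() : type.getSimpleName();
    }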

     

  4. chniemz says:

    Shouldn’t it be

    relTypeIndex.add(relationship, MapUtil.map(“className”, MyRelationship.getClass().getName()));

    instead of

    relTypeIndex.add(node, MapUtil.map(“className”, MyRelationship.getClass().getName()));

  5. Hi Michael,
    following Tero’s post I implemented something similar, but with the main difference that indexes are created automatically based on the Spring Data annotations in the classes. Similarly, the properties for nodes and entities are extracted automatically based on the annotated properties in the corresponding POJOs.
    I’d be very happy to contribute if you plan to have such a tool as part of SDN.

    Thanks,
    Roberto
