Schema Management (Migrating Data to new revisions)

Aug 30, 2013 at 1:48 PM
Cool Project!

I just cannot seem to find any discussion around refactoring code and how the data stored in DBreeze gets migrated from the old version of the application to the new one. Can someone point me in the right direction? Or is the use case here that we write custom code (meaning without a framework) to morph it on our own for each major/minor/revision upgrade of our app?

Either way, can someone provide said example(s) of what the DBreeze team proposes for this scenario?

Aug 30, 2013 at 2:05 PM
We're not opening the code for discussion for now; we want all major/minor bugs of this project to be handled by our team.
At least for now.

There is no need for data migration, because the data file structure hasn't changed since the first release and, actually, we don't plan to change it.
So data files born by DBreeze are compatible from version to version.
Aug 30, 2013 at 2:14 PM
Apologies if I wasn't clear.

If I have... say an object:
public class MyObject
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Code { get; set; }
    public string Category { get; set; }
}
and for some reason, I need to refactor this to:
public class MyObject
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Code { get; set; }
    public int CategoryId { get; set; }
}
public class Category
{
    public int Id { get; set; }
    public string Name { get; set; }
}
How do I morph the data in the database from Rev1 (the single class) to Rev2 (MyObject plus the Category class)?

The way we handle this with SQLite is to have a custom script that creates the category table, populates it, adds a temporary field (CategoryId), updates that with the id from category, then drops the category field from the myobject table. We do all of this using Fluent Migrator.

Any advice? It'd be the one concern I have before I start evaluating this as an alternative path: I need a better embedded DB than SQLite for apps, and this looks like it, since it seems to support Mono and Linux as well as the Win32 and Win64 platforms.
Aug 30, 2013 at 3:44 PM
Edited Aug 30, 2013 at 3:45 PM
DBreeze stores only byte[] keys and values and knows nothing about higher levels of data abstraction.
The architecture of DBreeze lets one table hold different value types under different keys, and even nested tables; even many nested tables per row.
DBreeze knows nothing about these structures. The morphing (or migration) process must be done manually.
The programmer knows the DB structure and can write the correct migration algorithm.
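To make that concrete for the MyObject/Category example above, here is a hedged sketch of such a manual migration. It assumes DBreeze's transaction API (DBreezeEngine, GetTransaction, SynchronizeTables, SelectForward, Insert, Commit); Serialize/Deserialize are hypothetical helpers for whatever byte[] row format the application uses, since DBreeze itself only sees the bytes.

```csharp
using System.Collections.Generic;
using System.Linq;
using DBreeze;

// One-shot migration from Rev1 rows (inline Category string) to Rev2
// (CategoryId referencing a separate "Category" table).
using (var engine = new DBreezeEngine(@"D:\temp\db"))
using (var tran = engine.GetTransaction())
{
    tran.SynchronizeTables("MyObject", "Category"); // reserve both tables for writing

    var categoryIds = new Dictionary<string, int>();
    int nextId = 1;

    // Materialize first: we rewrite "MyObject" while reading it.
    var rows = tran.SelectForward<int, byte[]>("MyObject").ToList();

    foreach (var row in rows)
    {
        var old = Deserialize<MyObjectV1>(row.Value); // hypothetical helper

        int catId;
        if (!categoryIds.TryGetValue(old.Category, out catId))
        {
            catId = nextId++;
            categoryIds[old.Category] = catId;
            tran.Insert("Category", catId,
                Serialize(new Category { Id = catId, Name = old.Category }));
        }

        // Overwrite the row under the same key, in the new shape.
        tran.Insert("MyObject", row.Key, Serialize(new MyObjectV2
        {
            Id = old.Id, Name = old.Name, Code = old.Code, CategoryId = catId
        }));
    }

    tran.Commit(); // all-or-nothing
}
```

Because everything happens inside one transaction, a crash mid-migration leaves the old data untouched.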

One SQLite database can be used by many concurrent OS processes; DBreeze can be used by only one OS process at a time.
It's more of a server solution, or a solution for a single application instance.

DBreeze works under Mono/Linux (tested on openSUSE and Ubuntu).
Aug 30, 2013 at 6:01 PM
So, it sounds like the answer to my question is that DBreeze has no migration strategy as part of the tooling. We'll still have to maintain some sort of data "upgrade" tool to bring the data (based on the example I gave) from the denormalized form to a normalized form (splitting Category into its own document store/table structure), same as we do now for Postgres/SQLite, and we'll have to have some sort of document store/table that records what's been done. Am I correct on this?

From what I've digested, DBreeze is a better alternative than SQLite, which is why it's piqued my interest. I need it as a backing store for a multi-user application whose data is served via web services, for lack of a better way of describing it.
Aug 31, 2013 at 10:05 AM
Edited Sep 1, 2013 at 10:06 AM
... Am I correct on this?
Yes, there is no specific tool; full freedom.
If you forget to create a secondary index, you will have to build it separately via your own scripts.
Isn't it beautiful? :)
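For example, a forgotten secondary index on Code could be backfilled afterwards with a small script of one's own. A hedged sketch, assuming DBreeze's transaction API; the index table name "MyObjectByCode" and the Deserialize helper are invented for illustration:

```csharp
// Backfill a secondary index table: key = zero-padded Code + "|" + Id,
// value = empty. Range scans on the Code prefix find matches, and the
// primary key is recovered from the tail of the index key.
using (var tran = engine.GetTransaction())
{
    tran.SynchronizeTables("MyObject", "MyObjectByCode");

    foreach (var row in tran.SelectForward<int, byte[]>("MyObject"))
    {
        var obj = Deserialize<MyObject>(row.Value); // hypothetical helper
        tran.Insert<string, byte[]>("MyObjectByCode",
            obj.Code.ToString("D10") + "|" + obj.Id, new byte[0]);
    }

    tran.Commit();
}
```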

On the other hand, concerning existing objects, we use some specific techniques.
If the object stored in a row is serialized, then there is no problem: in the future we can add more properties to the object, and deserialized old data will simply get the default value for the absent properties.
If the stored object is represented as a raw byte[], then we use the first byte to record a row version, so that we can distinguish the stored data structure by its version number.
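A minimal sketch of that version-byte technique; the layouts and field names below are illustrative assumptions, not a DBreeze format:

```csharp
using System;

// The first byte of the stored byte[] is the row version; the rest is a
// fixed column layout for that version.
public static class MyObjectCodec
{
    // v1 layout: [ver=1][Id:4][Code:4]                  (no CategoryId yet)
    // v2 layout: [ver=2][Id:4][Code:4][CategoryId:4]
    public static byte[] EncodeV2(int id, int code, int categoryId)
    {
        var buf = new byte[13];
        buf[0] = 2;                                      // row version
        BitConverter.GetBytes(id).CopyTo(buf, 1);
        BitConverter.GetBytes(code).CopyTo(buf, 5);
        BitConverter.GetBytes(categoryId).CopyTo(buf, 9);
        return buf;
    }

    public static (int Id, int Code, int CategoryId) Decode(byte[] row)
    {
        int id = BitConverter.ToInt32(row, 1);
        int code = BitConverter.ToInt32(row, 5);
        switch (row[0])                                  // dispatch on version byte
        {
            case 1: return (id, code, 0);                // old rows: default CategoryId
            case 2: return (id, code, BitConverter.ToInt32(row, 9));
            default: throw new NotSupportedException("row version " + row[0]);
        }
    }
}
```

Old v1 rows keep decoding unchanged after the upgrade; only the writer needs to emit the new layout.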

We use the second approach very often for high-density data. JSON and XML can't compare (in size and processing speed) with a simple byte[] laid out as a set of virtual columns for the different values.