Wednesday, September 14, 2011

Back with a bang!

So the last post was in 2009 and I’m hoping the next addition to this blog will happen sooner than 2013… So what has been going on in databaseland?

New name – Trunk

The main project has had a name change – AudioDB was too media-centric and, as a database, its name should reflect its generalised nature. The new name – at least as far as codenames go – is “Trunk”. I think this reflects the true nature of the project much better!

CCR Out – C# Async In!

Well the codebase has indeed been ported to .NET 4 – no surprise there. However, the big shocker is that reliance on the Microsoft Concurrency and Coordination Runtime (aka CCR) has ceased!

The database source has been rewritten to use the new asynchronous language support (at the time of writing still in CTP form) which will be putting in an appearance in the next version of the .NET Framework – that’s v4.5 or v5 or something like it…

The performance is equal to that of the CCR and, being a language feature, it has a much more natural syntax – which makes it far easier to write asynchronous code in a clean and logical manner.
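
To give a flavour of the difference, here is a minimal sketch of the async/await style – the IBufferDevice interface and its methods are purely illustrative stand-ins, not the actual Trunk types:

```csharp
using System.Threading.Tasks;

// Illustrative types only - the real Trunk classes are not shown here.
public interface IBufferDevice
{
    Task<byte[]> ReadPageAsync(ulong pageId);
    Task WritePageAsync(ulong pageId, byte[] buffer);
}

public class PageLoader
{
    private readonly IBufferDevice _bufferDevice;

    public PageLoader(IBufferDevice bufferDevice)
    {
        _bufferDevice = bufferDevice;
    }

    // With async/await the read-modify-write flow reads top to bottom
    // instead of being spread across CCR ports, arbiters and iterators.
    public async Task TouchPageAsync(ulong pageId)
    {
        byte[] buffer = await _bufferDevice.ReadPageAsync(pageId);

        buffer[0] ^= 0xFF;   // stand-in for "do some work on the page"

        await _bufferDevice.WritePageAsync(pageId, buffer);
    }
}
```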

Read more here

Refactoring The Beast

The amount of work needed to rip out the CCR was huge, so while the code was lying in pieces I took the opportunity to plug in Enterprise Library v5 and bring in an IoC container – Unity. The code already made use of the standard .NET component IoC pattern – overriding GetService to allow derived classes and other containers the opportunity to supply service implementations – and in many ways this is still preferred for certain services, since the database and its internal classes are so hierarchical in nature. However, for some objects using Unity makes more sense – obtaining the global caching buffer device, for example.
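
By way of illustration, registering and resolving such a service with Unity looks roughly like this – ICachingBufferDevice and its implementation are hypothetical names used purely for the example:

```csharp
using Microsoft.Practices.Unity;

// Hypothetical service interface and implementation - illustration only.
public interface ICachingBufferDevice { /* ... */ }
public class CachingBufferDevice : ICachingBufferDevice { /* ... */ }

public static class Bootstrap
{
    public static IUnityContainer BuildContainer()
    {
        var container = new UnityContainer();

        // One global caching buffer device shared by the whole engine.
        container.RegisterType<ICachingBufferDevice, CachingBufferDevice>(
            new ContainerControlledLifetimeManager());

        return container;
    }
}

// Usage: deep inside the engine, instead of walking GetService up the
// component hierarchy, the device is simply resolved from the container:
//   var device = container.Resolve<ICachingBufferDevice>();
```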

So all in all I broke the codebase (causing upwards of 1000 syntax errors) and slowly refactored and hooked it all up again.

Trunk SQL

So one of the reasons that development slowed down was that creating databases and tables using the codebase required writing an awful lot of code – you know – create message, post message, check result, blah blah blah – in short it was a real pain, even for test purposes! So now the Trunk solution is getting its very own SQL grammar. This is a major undertaking but, like so many things in the Trunk project, it is one that will grow slowly from a core set of functionality.

Now for those of you with experience of parsers, the thought of writing a SQL parser will fill you with dread – no surprise – SQL is a large grammar and has a number of quirks that make processing it a challenging prospect. Thankfully much of the heavy lifting has been taken care of by a lexer/parser generator tool called ANTLR.

This tool allows me to concentrate on defining the grammar without having to actually write the lexer (the thing that tokenises the input text) or the parser (the thing that assembles tokens into larger constructs) – hell, I don’t even have to write the standard code to walk over the parse tree and build actual actions. Really, ANTLR is a fine piece of work – check out information about the C# port of it here.
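
As a rough illustration, driving an ANTLR 3 generated lexer/parser from C# looks something like the following – TrunkSqlLexer, TrunkSqlParser and the statement rule are assumed names for this sketch, not necessarily what the real grammar produces:

```csharp
using Antlr.Runtime;

// Sketch of invoking an ANTLR 3 generated lexer/parser from C#.
// TrunkSqlLexer, TrunkSqlParser and 'statement' are assumed names.
public static class SqlDriver
{
    public static void Parse(string sql)
    {
        var input  = new ANTLRStringStream(sql);
        var lexer  = new TrunkSqlLexer(input);       // tokenises the text
        var tokens = new CommonTokenStream(lexer);
        var parser = new TrunkSqlParser(tokens);

        // Invoke the top-level rule; ANTLR builds the parse tree for us.
        parser.statement();
    }
}

// e.g. SqlDriver.Parse("CREATE DATABASE MyDatabase");
```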

Currently Trunk-SQL supports the following commands;

  • CREATE DATABASE
  • USE DATABASE
  • CREATE TABLE

Not exactly setting the world on fire as yet but these are early days and one must learn to walk before one goes unicycling…

So with SQL in-place (albeit a severely stripped down version of it) I can now test the database creation code much more easily – from single file-groups to multiple file-groups each with multiple devices – this will go a long way to making the code more robust!

As it happens the database creation code is looking rather good – data and log devices are both solid enough for attention to move to the CREATE TABLE statement.

Testing and debugging this will be a lengthy affair – there are a number of data-types and constraints to deal with, together with making sure that tables with a large number of columns also work as expected. In this version of the database an individual row will not be allowed to exceed roughly 8,000 bytes, as the entire row must fit into a data page.
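
As a trivial illustration of that constraint, a fixed-length row definition might be validated along these lines – the 8,000 byte figure and the names below are illustrative rather than the engine’s actual values:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch only - constants and names are not the real ones.
public static class RowLimits
{
    public const int PageDataSize = 8000;   // usable payload of a data page

    public static void ValidateRowSize(IEnumerable<int> fixedColumnSizes)
    {
        int rowSize = fixedColumnSizes.Sum();
        if (rowSize > PageDataSize)
        {
            throw new InvalidOperationException(
                string.Format(
                    "Row size {0} bytes exceeds the {1} byte page payload.",
                    rowSize, PageDataSize));
        }
    }
}
```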

After this work has been completed next up will be inserting data swiftly followed by indexing. The Table Index Manager already exists in code but as yet has not been exercised or tested at all.

With that in-place the task of SELECT/UPDATE/DELETE will be attacked – this will undoubtedly require yet another piece of serious programming – a query optimiser looks like a nasty piece of rules-based programming to me…

Monday, May 04, 2009

What a difference two years makes

Preamble

When I started this project some 4 years ago I really didn’t think I would still be writing it but here I am… To be fair much has happened in the past two years that has kept me away from this creation – I’ve changed country, lifestyle and even computer systems!

Not only that but the .NET framework has evolved from .NET 1.1 when I started this beast into .NET 3.5 SP1. I imagine that the code will be ported to .NET 4.0 when that sees the light of day in due course too…

Following the last post, the low-level entities were looked at in great detail – so much so that it took some three months to fix the problems this “look” caused. However it was all worthwhile – the low-level memory management, sparse file and overlapped I/O logic has been successfully used in a separate project (a multi-request HTTP file downloader) to great effect. The resultant system was very fast, resilient and efficient!

The asynchronous code was successfully ported from the .NET APM (that’s the Asynchronous Programming Model to you) to the Microsoft CCR, which resulted in code that is much easier to test, maintain and extend. Licensing issues mean that I may yet change the underlying concurrency framework, but for the moment CCR is king!

So where are we now?

State = current

Well, the index implementation was always half-complete – index entries can be added but not removed – and if that wasn’t enough, some of the index models (such as clustered indexing) have not been implemented at all.

So the first stage is creating a suitable test rig for playing with paged index trees and writing the necessary code to support adding/removing entries and hopefully balancing the trees too!

With working indices we then get finalised table handling, and then I will at long last be able to look at the next phase – hosting a table-driven neural network (actually a page-based neural network would be rather nice and might come first).

C# Port to 3.5

The project has long since been migrated to .NET 3.5 and now runs on 3.5 SP1, but the new language features are only being used in new code. I need to revisit all of the existing classes to ensure the best use is being made of them – in particular, more use of LINQ instead of explicit loop constructs. This work does have a purpose: Parallel LINQ is already here and could form an alternative to the CCR in certain cases, plus there is much talk of merging the CCR codebase with that of Parallel LINQ – so having both CCR code and LINQ code in place makes the next transition much easier to make when I am called upon to do so. And they say future proofing is impossible…
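
For the avoidance of doubt, the kind of rewrite I mean is nothing more exotic than this – the Page type below is a stand-in for illustration only:

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative stand-in type, not a Trunk class.
public class Page
{
    public int Id { get; set; }
    public bool IsDirty { get; set; }
}

public static class DirtyPages
{
    // The explicit-loop style much of the older code uses...
    public static List<int> WithLoop(IEnumerable<Page> pages)
    {
        var ids = new List<int>();
        foreach (var page in pages)
        {
            if (page.IsDirty) ids.Add(page.Id);
        }
        return ids;
    }

    // ...and the LINQ equivalent; swapping in AsParallel() would hand
    // the same query to Parallel LINQ with no structural change.
    public static List<int> WithLinq(IEnumerable<Page> pages)
    {
        return pages.Where(p => p.IsDirty).Select(p => p.Id).ToList();
    }
}
```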

Code expose

Yes I may begin to expose some of the miles and miles of source code I have painstakingly put together for this project – however I will not be releasing the software into Code Project / CodePlex or any other open-source repository – it’s taken too much of my time to give freely!

Anyway – that’s enough for now – this post was really about catch-up! Now it’s time for sleep!

Friday, May 25, 2007

The Final Shove... I mean - push!

Okay, the integration of the new memory manager and low-level streaming improvements is now almost finished - as you can imagine, if you swap a set of low-level objects for a new set that functions completely differently, it may take a while to wade through all the error messages...

If I can find the time then I'll be able to commence testing early next week - if I don't find the time then I may well end up repeating myself next week/next month etc...

Monday, May 07, 2007

Low-Level Engineering

Well I've been so busy with so many other projects that I've not had the time to devote to the Audio Database of late!

All has not been lost, and sometimes a break from a project means you can look at it with fresh eyes when you return - assuming of course that you do in fact return at all...

My fresh eyes have been taking a critical look at the low-level file and buffer handling, and I have implemented three rather important features which have, interestingly enough, led to a fourth rather radical change in the architecture. All four are listed as follows (in no particular order);

  • Sparse Files
  • Overlapped I/O
  • Scatter/Gather I/O
  • Memory Allocation

Sparse Files are a feature of NTFS which allows big files that contain mostly zeros to occupy only the space needed for the non-zero portions - clearly a feature every database file should be making use of!
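
For the curious, marking a file as sparse boils down to a single DeviceIoControl call with FSCTL_SET_SPARSE – the following is a bare-bones sketch rather than the code actually used in the project:

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

// Minimal sketch of flagging a file as sparse; illustrative only.
public static class SparseFile
{
    private const uint FSCTL_SET_SPARSE = 0x000900C4;

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool DeviceIoControl(
        SafeFileHandle hDevice, uint dwIoControlCode,
        IntPtr inBuffer, uint nInBufferSize,
        IntPtr outBuffer, uint nOutBufferSize,
        out uint bytesReturned, IntPtr overlapped);

    public static void MarkSparse(FileStream stream)
    {
        uint returned;
        if (!DeviceIoControl(stream.SafeFileHandle, FSCTL_SET_SPARSE,
            IntPtr.Zero, 0, IntPtr.Zero, 0, out returned, IntPtr.Zero))
        {
            throw new IOException("Failed to mark file as sparse",
                Marshal.GetLastWin32Error());
        }
    }
}
```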

Overlapped I/O is a system of reading and writing files that allows for asynchronous data-transfer. Without Overlapped I/O the implementation of the buffer cache manager would be much more difficult! This is because the cache manager takes care of reading and writing data and does so using a rather elegant algorithm.

Scatter/Gather file I/O is a method of reading and writing files that allows separate, non-contiguous buffers to be read or written using overlapped I/O - this technology is important as both the Read Ahead manager and the cache manager use it to speed up data transfer to and from the underlying file. This I/O technology places a number of demands on the caller, however these requirements are easily dealt with and in almost all cases are exactly what is required for a DBMS file system.

Memory Allocation had to change in order to properly support scatter/gather I/O, and this led to an improvement in the way buffers are allocated and managed. To support the scatter/gather logic, buffers must be sized according to the system page size - which for most Win32 systems is 4096 bytes - and must also be aligned on a page boundary. Satisfying the first requirement is simple; the second is surprisingly tricky. The other tricky aspect is dealing with .NET, as the buffers need to be pinned and passed to the scatter/gather I/O wrapped in yet another structure! As it turns out the solution involves turning the memory allocation scheme on its head!

The Memory Manager

The Windows Virtual Memory APIs have been around for ages and one of the things they give you is page-aligned memory. Another is the ability to reserve blocks of address space. To implement the scatter/gather support, both features are used - a managed virtual memory manager now takes care of buffer allocation by using the virtual memory functions to reserve the space needed for the buffer pool. The manager also tracks allocated buffers by maintaining a linked list of them. Memory is only committed when a buffer instance is requested, and the requested buffer is taken from the reserved address space - hence the system can reserve, say, 32Mb of memory for data pages (that's 4096 pages of 8192 bytes incidentally) while the actual memory consumed is determined by the buffers currently in use. Nice!
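
A stripped-down sketch of that reserve-then-commit pattern is shown below - the class name and sizing policy are illustrative only and error handling is minimal:

```csharp
using System;
using System.Runtime.InteropServices;

// Bare-bones sketch: reserve the whole pool's address space up front and
// only commit pages as they are handed out. Not the real Trunk code.
public sealed class VirtualBufferPool
{
    private const uint MEM_RESERVE    = 0x2000;
    private const uint MEM_COMMIT     = 0x1000;
    private const uint PAGE_READWRITE = 0x04;

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern IntPtr VirtualAlloc(
        IntPtr lpAddress, UIntPtr dwSize, uint flAllocationType, uint flProtect);

    private readonly IntPtr _baseAddress;
    private readonly int _pageSize;
    private int _nextPage;

    public VirtualBufferPool(int pageSize, int pageCount)
    {
        _pageSize = pageSize;

        // Reserve (but do not commit) the whole pool's address space.
        _baseAddress = VirtualAlloc(IntPtr.Zero,
            new UIntPtr((ulong)pageSize * (ulong)pageCount),
            MEM_RESERVE, PAGE_READWRITE);
        if (_baseAddress == IntPtr.Zero)
        {
            throw new OutOfMemoryException("Failed to reserve buffer pool");
        }
    }

    // Commit and hand out the next page-aligned buffer on demand.
    public IntPtr AllocatePage()
    {
        var address = new IntPtr(
            _baseAddress.ToInt64() + (long)_nextPage * _pageSize);
        IntPtr committed = VirtualAlloc(address, new UIntPtr((uint)_pageSize),
            MEM_COMMIT, PAGE_READWRITE);
        if (committed == IntPtr.Zero)
        {
            throw new OutOfMemoryException("Failed to commit buffer page");
        }
        _nextPage++;
        return committed;
    }
}
```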

The Advanced File Stream

Bringing the sparse files, overlapped I/O, scatter/gather I/O and virtual memory buffers together under one so-called roof is done with a new managed class derived directly from System.Stream. Unfortunately I had to derive directly from System.Stream rather than the more obvious System.FileStream because the latter does not allow the creation of unbuffered streams or write-cache-disabled streams (both requirements for using scatter/gather I/O). Thankfully much of the code can be lifted directly from System.FileStream (I love Reflector), with the only changes being a reduced set of constructors - since we can only use scatter/gather on overlapped files, several of the other options are fixed, which simplifies things somewhat.
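
The interesting part is obtaining the handle in the first place, since FileStream will not set the required flags for you - something along these lines (an illustrative sketch, not the project's actual code):

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

// Sketch: open a handle with the unbuffered/overlapped/write-through flags
// that scatter/gather I/O requires, then hand it to the custom stream.
public static class AdvancedFileHandle
{
    private const uint GENERIC_READ  = 0x80000000;
    private const uint GENERIC_WRITE = 0x40000000;
    private const uint OPEN_ALWAYS   = 4;
    private const uint FILE_FLAG_OVERLAPPED    = 0x40000000;
    private const uint FILE_FLAG_NO_BUFFERING  = 0x20000000;
    private const uint FILE_FLAG_WRITE_THROUGH = 0x80000000;

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    private static extern SafeFileHandle CreateFile(
        string fileName, uint access, uint share, IntPtr security,
        uint creationDisposition, uint flags, IntPtr template);

    public static SafeFileHandle Open(string path)
    {
        SafeFileHandle handle = CreateFile(
            path, GENERIC_READ | GENERIC_WRITE, 0 /* no sharing */, IntPtr.Zero,
            OPEN_ALWAYS,
            FILE_FLAG_OVERLAPPED | FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH,
            IntPtr.Zero);
        if (handle.IsInvalid)
        {
            throw new IOException("CreateFile failed",
                Marshal.GetLastWin32Error());
        }
        return handle;
    }
}
```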

To support scatter/gather four new methods are added to the stream code;

  • BeginReadScatter / EndReadScatter
  • BeginWriteGather / EndWriteGather

No synchronous methods are provided - although these would simply call their asynchronous counterparts in any case.

The begin methods take the usual asynchronous parameters of a callback and a state object, in addition to an array of virtual buffer objects that indicate the memory blocks to be persisted.
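
In other words, the new members have roughly this shape - these signatures are my sketch of the idea rather than the definitive API:

```csharp
using System;

// Hypothetical shape of the new members - VirtualBuffer stands in for the
// virtual-memory backed, page-aligned buffer type; real signatures may differ.
public abstract class VirtualBuffer { /* page-aligned, page-sized memory */ }

public interface IScatterGatherStream
{
    IAsyncResult BeginReadScatter(VirtualBuffer[] buffers,
        AsyncCallback callback, object state);
    int EndReadScatter(IAsyncResult asyncResult);

    IAsyncResult BeginWriteGather(VirtualBuffer[] buffers,
        AsyncCallback callback, object state);
    void EndWriteGather(IAsyncResult asyncResult);
}
```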

Buffer Changes

To integrate these changes into the existing database framework I have needed to make some rather drastic changes to the Buffer class used by the page classes for their internal persistence. Until now the Buffer classes have been in control of their own loading and saving, however this cannot continue - the loading and saving (possibly of multiple buffers) must now be controlled by an external object. This may well wind up being a scatter/gather helper object rather than the read/write request handler directly - the idea being that buffers and page-ids can be added to this mystical helper and, when contiguous runs are detected, these can be batched into a single overlapped operation.
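
Detecting those contiguous runs is conceptually simple - a sketch of the idea (with stand-in types, not the project's real classes) might look like this:

```csharp
using System.Collections.Generic;
using System.Linq;

// Sketch of the "mystical helper": pending pages are sorted by page id and
// consecutive ids are grouped into runs, each run being a candidate for a
// single scatter/gather operation. Types here are illustrative only.
public static class RunDetector
{
    public static IEnumerable<List<KeyValuePair<uint, byte[]>>> GetContiguousRuns(
        IDictionary<uint, byte[]> pendingPages)
    {
        var run = new List<KeyValuePair<uint, byte[]>>();
        foreach (var entry in pendingPages.OrderBy(e => e.Key))
        {
            // Start a new run whenever the page id is not consecutive.
            if (run.Count > 0 && entry.Key != run[run.Count - 1].Key + 1)
            {
                yield return run;
                run = new List<KeyValuePair<uint, byte[]>>();
            }
            run.Add(entry);
        }
        if (run.Count > 0)
        {
            yield return run;
        }
    }
}
```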

While I am breaking the internal buffer implementation it's probably the right time to look at splitting the implementation of transacted and non-transacted buffers into separate classes - it's confusing enough as it is!

Conclusion

These changes will take a while to implement, however the effort will be well worth it - the overall performance improvement, both from a memory-usage viewpoint and an outright I/O viewpoint, will be staggering. The use of sparse file technology will optimise disk space usage too!

Thursday, December 07, 2006

CCR + DSSP = Distributed Scalable Buffer Devices

After much fiddling around and moving of code I've finally got to a point where I can test the implementation of a feature I call the distributed buffer device service. Now before I go on to describe what this is, you might want to read about the CCR and DSSP sub-systems upon which all the service implementations are based...

Okay, so where was I? Ah yes - for the past two months I have been investigating whether it is possible (and desirable) to write parts of the database engine as DSSP services.

I had already integrated and converted the code-base to use the CCR framework rather than the difficult-to-code/debug Asynchronous Programming Model - a shame really, since I'd become rather good at writing those wrappers!

So this investigative process was really a continuation of existing work. The initial implementation of a Physical Buffer Service was simple enough and even compiled and built without too much hassle, however the Container Buffer Service (which is regarded as the minimum service needed to perform useful testing) ran into difficult and damn obscure issues - all related to the generation of the proxy project code.

I have finally (after much hair pulling - very painful considering I've no hair on my head) got this Container Buffer Service to compile and the proxy service to build! Wow!

So what's the point of all this abstraction? Well DSSP allows services to communicate with each other using HTTP, and hence each service need not exist on a single machine - since our services are now DSSP services we automatically get distributed physical device services. We didn't have THAT before, so this must be considered "progress"...

Now the test harness for these services is actually an NT service - I call this the "Block File-System Service" and it could well form the underpinnings of the Audio Database service.

The implication of all this is that the database file-group devices will maintain a one-to-many relationship with FileSystem service instances running on potentially multiple machines - sounds super uber scalable to me...

Once I have Container Buffer Services working it will be time to look at the caching version. Note the caching implementation will provide caching at both ends of the network connection to increase networking performance.

Tuesday, November 07, 2006

Concurrent Pains in my Brain

A long time with no posts means that the new messiah encapsulated within the Microsoft Robotics Toolkit is proving a right devil to implement!

Right now it has added four more projects to the overall solution and caused no end of changes to the code framework!

The most important change is the adoption of DSS (Decentralized Software Services) and all devices are being rewritten to take advantage of this concept. The basic idea is to encapsulate all messages between systems in SOAP messages. These messages can then use a unified transport mechanism to reach their destination, and with DSS that destination can be another machine with no further coding!

The first service to arrive from this happy relationship was the PhysicalBufferDevice service. This service is responsible for low-level reading and writing to and from an associated file using asynchronous I/O together with coordination of resizing operations.

The next one up is the ContainerBufferDevice service that deals with clusters of PhysicalBufferDevices.

Following that is the CachingBufferDevice service that not only deals with clusters of PhysicalBufferDevices like ContainerBufferDevice but also uses optimised buffer caching to increase node performance.

Since DSS is being used a new hosting environment was devised to ensure we can control how our DB services are started and obviously control who has access to the service instances.

Still with me? Good! Well all this is wrapped up in the Audio File-System NT service. The purpose of this is to allow the upstream database core to distribute not only files but caching too to multiple machines - this will be extremely scalable and promises to have scope beyond the Audio DB project.

Now you can see why I've been too busy to post... Anyway, the NT service is complete and is undergoing final testing. Once this has been completed I will be able to do some proper stress testing and, assuming all goes as well as I expect (haha), I'll be able to continue with the next layer up - and something tells me that there is another layer in front of what was the next layer: I will be needing a file-system unification layer for all those distributed file-system services...

Sunday, September 03, 2006

Concurrency Messiah

Well it's a funny old game, this programming lark, and every so often you come across something so ground-breaking that it quite simply takes your breath away. Today's breath-taking event concerns a new piece of pre-release software from those good old folk at Microsoft: a .NET toolkit known as the Robotics Studio. Despite having robots in mind, it comes with a fantastic framework for helping with multi-threaded applications - and this database is very threaded indeed...

From initial experiments I will be able to fully recode the BufferDevices and all the Locking Primitives to make use of this new technology and seriously reduce the complexity of the underlying software - yes folks, it's another piece of re-engineering ahead, and I think the 27 errors I have at the moment will slowly but surely multiply before I get the codebase back under control - a shame really, as the table row persistence was almost finished too!!

However this is a worthwhile exercise, as a fully thread-safe, maintainable piece of code is not an easy thing to achieve but right now it is looking entirely possible! I am not looking forward to entering the world of locks - they were a nightmare the first, second and third time around!