Wednesday, November 30, 2011

3D Engine: DirectX, C++, C#, SlimDX, and SharpDX

This blog post only exists to prevent expired linking. My blog has moved to: http://pabloaizpiri.com/

--------------------------------------------------------------------------------

Programming Love & 3D Engines
I have an admission to make. My true love of programming comes from game development. Sure, I love a good web app- but honestly, I just love cool technology. Networking is cool. Database internals are cool. But what is a cooler technology than a 3D engine? Now that's awesome!

3D engines and games are intriguing to me because of the technical challenges that must be addressed. Most business applications address some of these in one way or another, but a 3D engine usually has to address all of them, and address them very well. Sorting? Check. Searching? Check. Drawing? Check. Complex math? Check. AI? Sound? Hardware? Etc. Additionally, a good 3D engine not only has to implement complex algorithms, but also requires a strong architecture to manage its complexity while remaining efficient and still allowing great power. It needs to be a database (of triangles, essentially), constantly read input, and perform all sorts of logic- and all these things must come together seamlessly to create a great game.

How I Started
My first experience was with LEGO MINDSTORMS when I was about 12: I programmed the RCX (2.0) to make a cool little game with a light sensor and a moving piece of paper. Next came GameMaker when I was 13- then 3D RAD, A5 Game Studio Engine, and finally Blitz3D. Over the last couple of years I've wanted to skip all the middleware and write a 3D engine using OpenGL or DirectX. At one point I wrote a very simple software renderer, but that was about as close as it got... I always had a hard time picking up DirectX or OpenGL- it seemed like SO MUCH was thrown at you just to draw a triangle... and that was fine, except I wanted to know what all that code did. On top of that, I'm not super proficient in C++, which most tutorials are written in.

At Last: Beginning DirectX
One weekend while I was visiting a friend in Houston, he convinced me to buy an old book he knew I was interested in. (I read it the whole time we were at Half Price Books.) It was on writing a managed 3D engine- just what I wanted! I slowly started on it and actually began making progress. However, it wasn't long before I realized I was working with Managed DirectX 9, a boat that had sunk and that MS had abandoned- but not before I had made some good progress. Encouraged by that progress, I decided I'd push forward and try once again to tackle DirectX through a managed interface or library like SlimDX. I had tried to use SlimDX in the past and it had never worked out well, but in my search I also found SharpDX- which is essentially a library of "extern" C# calls to the DirectX API. I figured I might as well stay as close as possible to the native API, since most DirectX tutorials are for C++ and call that same API. Another major reason I chose it was SharpDX's performance.

This turned out to be pretty great. The DirectX API is actually starting to make a lot more sense now, and I feel I could even jump straight into C++. However, for my needs the performance is already overkill- and since I can develop much faster in C# with the familiarity of the .NET framework, I'm going to stick with that. I've actually gotten far enough that I've implemented a simple object management system (for moving entities around relative to one another) and built an import function for MQO models. (Still limited, though.) MQO is the format for the 3D modeller Metasequoia. I know it is a strange and probably obscure, little-known modeller- but I found it a few years ago through the FMS website and have found it to be an absolutely EXCELLENT simple, easy to use, and free 3D modeller. I LOVE it!
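To give an idea of what I mean by moving entities around relatively, here's a minimal sketch of the concept (hypothetical names, not the actual JDX code; it assumes SharpDX's Matrix type and its row-vector convention):

using SharpDX;

// Hypothetical sketch of relative entity management: each node stores a
// transform relative to its parent, and world transforms are composed
// parent-to-child on demand.
class SceneNode
{
    public SceneNode Parent;
    public Matrix LocalTransform = Matrix.Identity;

    // World transform = this node's local transform composed with all of
    // its ancestors' (row-vector convention, so local comes first).
    public Matrix GetWorldTransform()
    {
        if (Parent == null)
            return LocalTransform;
        return LocalTransform * Parent.GetWorldTransform();
    }

    // Moving an entity "relatively" only touches its local transform;
    // children pick the change up automatically through composition.
    public void Translate(float x, float y, float z)
    {
        LocalTransform *= Matrix.Translation(x, y, z);
    }
}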

Credit Where Credit Is Due
I definitely have to give credit to the following sources for getting me where I am right now.

Introduction to 3D Game Engine Design Using DirectX 9 and C# - This was the book that somehow "flipped the switch" and helped DirectX make sense after reading and coding through only a couple of chapters... (I skimmed 3-4 of the others and never read the rest.)
http://www.two-kings.de/ - Some help through its clear explanations/tutorials.
http://zophusx.byethost11.com/tutorial.php?lan=dx9&num=0 - HUGE help. This guy goes through enough DETAIL that I could actually understand. I heavily considered switching to C++....
http://www.toymaker.info/Games/html/lighting.html - Helped my understanding of shaders.
http://www.rastertek.com/tutindex.html - Helped a LOT in understanding and writing shaders for multi-texturing and special effects.

And of course Wikipedia... (if you haven't donated but you use it, you should!) and the DirectX documentation. It's MUCH easier to read now that I've grasped the major concepts, though I undoubtedly have quite a bit to go. And finally, thank God for the Internet and search engines... if you're persistent, you'll find what you need.

The JDX Engine
So that's my newest passion- building this managed 3D engine. I couldn't really think of a name and finally settled on "JDX". When I've made a fair amount of progress, I actually want to build a Recoil clone (I played this game when I was 13 and have fond memories of it), since that would be easy to do and not require much artistic skill. If it turns out well, I'd love to make this 3D engine public for anyone who wants to write a managed DirectX 3D game without being a DirectX expert. It will probably be fashioned somewhat after the Blitz3D API, since I've always found it extremely intuitive.

I'll probably give updates on JDX here and there when I can. Mostly I'm teaching myself, so I'm making a lot of mistakes. I'm very open to learning how to actually write an efficient, well-designed D3D code base: a lot of examples *work*, but there seem to be many different ways to do things in D3D, and I want to not just perform the task but do so efficiently. Which brings me to one of my new major pet peeves: most tutorials say very little about how to actually best structure the code. (E.g. do you store a vertex buffer for each object in your world, or do you attempt to store them all in the same buffer? What about object parts? How do you apply the correct shaders when you do so? Etc. A sketch of the two buffer layouts follows below.)
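To make that vertex buffer question concrete, here's a rough sketch of the shared-buffer layout (entirely hypothetical types and names- an illustration of the trade-off, not JDX code):

using System;

// Option A: each object owns its own vertex buffer- simple, but many small
// buffers mean a buffer switch (and likely a separate draw call) per object.
// Option B (sketched here): pack all geometry into one large shared buffer
// and give each object a range into it- fewer state changes, but you must
// manage offsets and group draws by shader/texture yourself.

class SharedVertexRange
{
    public int StartVertex;  // offset into the shared buffer
    public int VertexCount;  // how many vertices belong to this object
}

class VertexPool
{
    private int _nextFree;           // next unused vertex slot
    private readonly int _capacity;  // total vertices the big buffer holds

    public VertexPool(int capacity)
    {
        _capacity = capacity;
    }

    // Reserve a contiguous range for one object; the caller then copies its
    // vertices into the underlying D3D buffer starting at StartVertex.
    public SharedVertexRange Allocate(int vertexCount)
    {
        if (_nextFree + vertexCount > _capacity)
            throw new InvalidOperationException("Shared vertex buffer is full.");

        var range = new SharedVertexRange { StartVertex = _nextFree, VertexCount = vertexCount };
        _nextFree += vertexCount;
        return range;
    }
}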

Until next time....

UPDATE: Bought another couple of books from Amazon on 3D mathematics and DirectX 10; it's helping lots. The engine is coming along well... here are some updates.

NodeJS & Simple C# HTTP Server

This blog post only exists to prevent expired linking. My blog has moved to: http://pabloaizpiri.com/

--------------------------------------------------------------------------------

So... I've read quite a bit about NodeJS and was introduced to it very early by one of my good friends, Dominic ( http://dominicbarnes.us/ ). He's a huge JavaScript fan, so NodeJS was a big hit. I like JavaScript, but admittedly I am nowhere near as proficient as he is, nor do I fully understand the functional paradigm needed to use it to its full extent. (He's great at pretty much everything open source/on the other side of the MS/Windows fence, so if you need someone like that, go hire him and pay him lots of money - you won't regret it.) NodeJS got attention with its claims to efficiency through its non-blocking programming style- the functional JavaScript would, in theory, make writing such code easy.

The Bleeding-edge Event Model in NodeJS?
This was a couple of months ago. I was fascinated by the potential NodeJS performance gains and decided I would try writing a simple server in C# with the same model. I figured since C# is compiled, it might be at least somewhat quicker than the Windows version of NodeJS. Of course, it all seems silly now, having learned what I did about IIS through the process. I never realized most of the performance comparisons were against Apache, and IIS already performs better than Apache anyway. Of course, that seems obvious now- what was I expecting? I suppose at times we can all be suckers for the success stories of the underdog coming out on top, but in practice that is generally not the case. Regardless, it was a fun learning experience. My proof-of-concept server did turn out to be incredibly fast (and considerably faster than the Windows NodeJS at the time), but that doesn't mean much considering it offered limited functionality. It was a simple test; I used .NET's HTTP request classes rather than building my own implementation, to keep it simple. (That would have been a huge part of the effort.) It was really fun trying to think of ways to optimize my little server. (Request handling/caching/reading from disk/etc.)

A C# Server Like NodeJS
Basically, it is a C# application that only has a taskbar tray icon for an interface (I never got far enough to turn it into a service and separate the UI from the server) and sits around waiting for requests. A main thread just sits around listening on the HTTP port[s] all the time, and whenever a request comes through, it hands it off to a worker thread. The worker thread checks if the response is cached in memory and, if so, returns it; if not, depending on the extension, it will either return a static file or image, or compile the script page, add it to the cache, and return the result (or just run the page's compiled code if it is already cached).
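Here's a minimal sketch of that listen-and-hand-off loop (an illustration using .NET's HttpListener and ThreadPool, not the original project code; the cache here is just a dictionary and the "rendering" is a stub):

using System;
using System.Collections.Concurrent;
using System.Net;
using System.Text;
using System.Threading;

class MiniServer
{
    // Response cache: request path -> rendered bytes.
    static readonly ConcurrentDictionary<string, byte[]> Cache =
        new ConcurrentDictionary<string, byte[]>();

    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");
        listener.Start();

        // The main thread does nothing but accept requests...
        while (true)
        {
            HttpListenerContext context = listener.GetContext(); // blocks
            // ...and immediately hands each one off to a worker thread.
            ThreadPool.QueueUserWorkItem(Handle, context);
        }
    }

    static void Handle(object state)
    {
        var context = (HttpListenerContext)state;
        string path = context.Request.Url.AbsolutePath;

        // Serve from cache if this path has been rendered before.
        byte[] body = Cache.GetOrAdd(path, p =>
            Encoding.UTF8.GetBytes("<html>rendered: " + p + "</html>"));

        context.Response.OutputStream.Write(body, 0, body.Length);
        context.Response.Close();
    }
}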

Having one thread completely devoted to listening for requests and passing the actual request handling to a thread pool allowed my server to respond to requests extremely quickly- this was the concurrency I was after. The compiled scripts would run quickly once the compilation for a page script was cached, as it was literally like running a function that returned a string. (Plus, obviously, the JIT will also compile it to native code the first time the function is called.)

Yes, it compiles on demand! But that wasn't as big a deal as I thought it would be. It was much easier than I thought since .NET comes with compiler libraries for C#. All my server does is some string parsing on the requested file to look for '@{' and '}@' symbols to know where the C# code begins and ends. (C# "script", anyone?) As I mentioned, compiled methods are kept in memory so that subsequent requests are extremely fast.
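Roughly, the compile step looks like this (a simplified sketch using .NET's CodeDom compiler- the class and method names are made up, the real server also has to stitch the surrounding HTML back together, and error handling is minimal):

using System;
using System.CodeDom.Compiler;
using System.Reflection;
using Microsoft.CSharp;

static class PageCompiler
{
    // Extracts the C# between @{ and }@ and compiles it as a method that
    // returns a string.
    public static MethodInfo Compile(string pageSource)
    {
        int start = pageSource.IndexOf("@{") + 2;
        int end = pageSource.IndexOf("}@", start);
        string code = pageSource.Substring(start, end - start);

        string source =
            "using System;\n" +
            "public static class Page {\n" +
            "  public static string Render() {\n" +
            code + "\n" +
            "  }\n" +
            "}";

        var provider = new CSharpCodeProvider();
        var options = new CompilerParameters { GenerateInMemory = true };
        options.ReferencedAssemblies.Add("System.dll");

        CompilerResults results = provider.CompileAssemblyFromSource(options, source);
        if (results.Errors.HasErrors)
            throw new InvalidOperationException("Page script failed to compile.");

        // This MethodInfo is what gets cached so later requests skip compilation.
        return results.CompiledAssembly.GetType("Page").GetMethod("Render");
    }
}

A page like <p>@{ return DateTime.Now.ToString(); }@</p> then boils down to one cached Render() call per request.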

I realize NodeJS operates a bit differently. NodeJS listens on a single thread (the main event loop) and when a request comes in, it immediately processes it. For requests that would require "blocking" functions (such as file I/O... though technically any function call is blocking by definition), a callback is given and the "blocking" function is queued on a thread pool- this way the main event loop thread goes right back to processing and listening for requests. When the "blocking" function completes, it calls the callback function on the main event loop and the request is finished on that main thread. (As I thought through this, I began to realize some potential shortcomings.)

In my case, I simply created the event loop to only listen for requests and then hand off request handling to the thread pool. This is because I couldn't guarantee that a page script wouldn't perform a "blocking" operation. However, had I continued with the project and had my intention been to imitate NodeJS exactly, I would probably have needed to build a library of functions that page scripts could call to handle "blocking" operations; a sketch of the idea follows below. This is where the power and ease of the functional programming style of JavaScript would have been nice. Doing this in C# would have been ugly, but it would be much easier to adopt JavaScript's callback style to help segregate those "blocking" operations that should be executed in the thread pool from those that should run in the main event loop. (Technically I didn't have a "main event loop", since all mine did was accept and hand off the requests, but you get the point.) In the end, it didn't really matter that I didn't go to all the lengths to simulate that, because for my tests I wrote a page script that didn't do any file I/O or other "blocking" operations. (Which I suppose makes for poor tests, but good enough for what I needed.)
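Just to illustrate, here's what that callback style might look like in C# (a hypothetical sketch- ReadFileWithCallback is a made-up helper, and a real imitation would also marshal the callback back onto the event loop thread):

using System;
using System.IO;
using System.Threading;

static class BlockingOps
{
    // Node-style: the blocking read runs on the thread pool and the callback
    // fires with the result when it's done, so the caller's thread is never
    // tied up waiting on the disk.
    public static void ReadFileWithCallback(string path, Action<string> callback)
    {
        ThreadPool.QueueUserWorkItem(delegate
        {
            string contents = File.ReadAllText(path); // the "blocking" part
            callback(contents); // NodeJS would queue this back onto the event loop
        });
    }
}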

Realizations and Some Final Thoughts on NodeJS & IIS
Throughout the whole process I was researching more about the IIS pipeline, and I began to realize IIS does pretty much the same thing my server did as far as listening, handing off requests, and working with a thread pool to process them. (Of course, it does it a whole lot better.) Eventually I stopped development; the code is here if you're curious. (Since I abandoned the project, the "scripting" support is very limited- it supports C# since it uses the .NET compiler, but the page script code doesn't have access to any server variables like POST/GET, etc., making it close to useless.)

So, NodeJS/IIS thoughts. I think it is a cool technology and I've still got a lot to learn on the subject, but I've researched enough that I feel I have a fair opinion. I think IIS does a pretty darn good job, and the NodeJS model isn't exactly ground-breaking here... it's actually been around for a long time. Apache is the big web server it always seems to be compared to, and I suppose that's where the big performance win is, since Apache spawns a new thread for every request. (Maybe they just need to implement a thread pool in their pipeline?) My thought is you'd be hard pressed to get test results (and not just mass concurrent request tests) where a well-written MVC.NET page on IIS under-performs its equivalent NodeJS page.

In closing, here are a couple of articles I agree with. I think the author is rather harsh/offensive toward the NodeJS community, but he seems to hit the nail on the head as far as analyzing performance and the NodeJS model:
http://teddziuba.com/2011/10/node-js-is-cancer.html
http://teddziuba.com/2011/10/straight-talk-on-event-loops.html

Part 2: SqlBulkCopy Class (MS SqlServer and .NET)

This blog post only exists to prevent expired linking. My blog has moved to: http://pabloaizpiri.com/

--------------------------------------------------------------------------------

At work we're making an effort to contribute technical knowledge to a centralized IT wiki. I like that. Writing encourages a good understanding of the technology and its benefits. I didn't think I'd have much to write, but I'm surprised how some things I take for granted as simple were new to others, and vice versa. This two-part series is from those entries.



Introduction
Normally, inserting rows into SQL Server is quick and easy, done through a simple INSERT SQL statement. This is fine when saving data to one, two, or even a few rows. However, when it is necessary to insert larger sets of data, this method becomes not only functionally inadequate, but slow and clunky. In this entry (part two of a two-part series) I want to write about the second option for inserting large sets of data: using .NET's SqlBulkCopy class.
The SqlBulkCopy Class
When it is necessary to insert more than about 1,000 rows of data, a TVP begins to reach the limits of its performance gain. If we are using TVPs only for inserts, we can move up and dramatically increase performance by using .NET's SqlBulkCopy class. In addition to providing the functionality for large inserts, the SqlBulkCopy class can also be used to copy large amounts of data between tables. With SqlBulkCopy we can deal with millions of rows if need be. Here is an amazing whitepaper on the SqlBulkCopy class' performance: http://www.sqlbi.com/LinkClick.aspx?fileticket=svahq1Mpp9A%3d&tabid=169&mid=375

Using the SqlBulkCopy class is fairly simple. The dataset you use must match the columns on the destination table. If the order or the column names are a bit different, that's okay, since that can be handled with the SqlBulkCopy class' ColumnMappings property, which exists just for that. Here's a .NET sample of using the SqlBulkCopy class to update a table named "tblSIWellList" from a table named "MyData" within a DataSet:
Using objConnection As System.Data.SqlClient.SqlConnection = GetSQLConnection()
    objConnection.Open()
           
    Using bulkCopy As SqlBulkCopy = New SqlBulkCopy(objConnection)
        bulkCopy.DestinationTableName = "dbo.tblSIWellList"
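        ' Map source (DataSet) column names to their destination table columns.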
        With bulkCopy.ColumnMappings
            .Add(New SqlBulkCopyColumnMapping("ID", "ID"))
            .Add(New SqlBulkCopyColumnMapping("EPD", "EPDate"))
            .Add(New SqlBulkCopyColumnMapping("Comments", "Comments"))
            .Add(New SqlBulkCopyColumnMapping("Date", "Date"))
            .Add(New SqlBulkCopyColumnMapping("RTP_Date", "RTPDate"))
        End With
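        ' Stream every row of the "MyData" table to the server in one bulk operation.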
        bulkCopy.WriteToServer(ds.Tables("MyData"))
    End Using
End Using

Because the SqlBulkCopy class is designed to copy/insert a large number of rows, transactions are handled in batches. It is possible to specify how large each batch is (e.g. 5,000 rows at a time), but by default a single transaction ("batch") is used for all rows. When committing transactions in batches, a failed batch will only roll back the last active transaction. (This may not necessarily be all rows, if a previous batch was successfully committed.)
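For example (a C# sketch this time- the connection string and DataTable are placeholders), batching in groups of 5,000 rows with the class' internal transaction handling looks like this:

using System.Data;
using System.Data.SqlClient;

class BatchedBulkInsert
{
    static void Run(string connectionString, DataTable dataTable)
    {
        // UseInternalTransaction wraps each batch in its own transaction,
        // so a failure only rolls back the batch that was in flight.
        using (var bulkCopy = new SqlBulkCopy(connectionString,
               SqlBulkCopyOptions.UseInternalTransaction))
        {
            bulkCopy.DestinationTableName = "dbo.tblSIWellList";
            bulkCopy.BatchSize = 5000; // commit a transaction every 5,000 rows
            bulkCopy.WriteToServer(dataTable);
        }
    }
}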

Considerations when using .NET's SqlBulkCopy class
There are a few "gotchas" to keep in mind when using the SqlBulkCopy class (see the sketch after this list):
  • When the source and destination table data types are different, SqlBulkCopy will attempt to convert to the destination data type where possible, but this will incur a performance hit.
  • By default, PK/identity values are assigned by the destination and the source values are not preserved.
  • By default, constraints are not checked and triggers are not fired. Also, row-level locks are used. Changing these settings may affect performance.
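Those defaults can be changed through the constructor's options flags. A quick C# sketch (the option names are real SqlBulkCopyOptions values; the connection string and DataTable are placeholders):

using System.Data;
using System.Data.SqlClient;

class BulkInsertWithOptions
{
    static void Run(string connectionString, DataTable dataTable)
    {
        using (var bulkCopy = new SqlBulkCopy(connectionString,
               SqlBulkCopyOptions.KeepIdentity |       // preserve source PK/identity values
               SqlBulkCopyOptions.CheckConstraints |   // enforce check constraints on insert
               SqlBulkCopyOptions.FireTriggers |       // run the destination's insert triggers
               SqlBulkCopyOptions.TableLock))          // take a table lock instead of row locks
        {
            bulkCopy.DestinationTableName = "dbo.tblSIWellList";
            bulkCopy.WriteToServer(dataTable);
        }
    }
}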