I’m in the process of porting some very old code of mine to the new modeling core and I just got partial support for the JT Open file format working! If you’ve never heard of it, JT Open is a modern CAD transfer format. Although it was designed by Siemens / UGS, it is intended to serve as a CAD exchange/storage format rather than being tailored to one particular application. Think Alembic for CAD: It can contain polygonal, NURBS and parametric surface information (cylinders, offset surfaces, etc.), PMI (product and manufacturing information) and a couple of other things your garden-variety 3D-poly-based format won’t be able to handle. On the other hand, it cannot store animations.
Well, this is another instance of “just let me add this quickly so I can test what I wanted to do originally”. What I started with was adding a depth-of-field shader to the post-processing layer of my OpenGL renderer; what I ended up with was thinking a lot about model unit scale and environment maps. Let me explain…
After working on Oculus Rift DK1 support on and off for a couple of weeks, I just managed to get everything up and running. Supporting head tracking was quite easy, but - as the Oculus documentation correctly states - switching the rendering back-end required some major changes.
While working on iOS, I never got around to looking into frame buffer objects in any depth. If you’re like me, you just copied & pasted the multi-sampling resolve code samples from Apple’s documentation and never worried about it again. But to implement the post-processing framework for the 3D modeler project, this wasn’t enough.
It’s been a long time since the last post and the code base has grown a lot, so here is a quick update:
The modeling code is maturing quite nicely! So here is a short update:
Well, here is another first: For all of my commercial career, I have worked on Windows-only applications. All my side projects over the years were either never intended to be deployed to someone else, or they were iOS code, where deployment is really easy. But getting my 3D modeler code to be deployable proved to be quite an adventure.
A couple of years ago, I was hunting for a new job and tried to find interesting companies in the 3D/CG field. After a while, I had a nice list of links that people asked me about all the time. CG is such a niche that the biggest part of a job search is actually finding the company, not getting in the door.
As a stress test for the architectural design of the new input sub-system for the modeling core, I’ve started implementing various input metaphors and just finished the support for a 3Dconnexion SpaceMouse-type device. In general, it’s fairly simple, and once you have registered for and obtained the SDK, it is a matter of an hour or so to get the base code running… in theory…
The basic editing mesh and the file I/O extension point of the modeler core are working now. What this means is that the basic infrastructure for supporting import/export formats is operational, at least for poly-formats - I haven’t started with NURBS yet. As a proof, I’ve implemented the OFF, PLY and OBJ formats, as those are rather simple text-based formats. The material system isn’t ready yet, so it’s all plain colors right now.