• Troubleshooting Perl Module Installation

    I’ll be the first to admit it: I’m not a huge Perl user.  In fact, I wouldn’t even call myself a minor Perl user.  Luckily there are others out there who do great things with it, like the Bugzilla people.

    Having used it for some time, I’ve come to really appreciate all the functionality that it brings and have enjoyed the positive impact it’s had on my ability to concentrate on single problems without creating wider ones.  Unfortunately, no one said it was an easy installation.

    Having been through my own pains with Bugzilla installations before, especially with regard to sending secure email, no one was more upset than I when I needed to re-install the operating system for a server hosting our Bugzilla.

    I flashed back to annoyed memories of struggling with Perl modules and CPAN before I could even get to the configuration files for the other email issue.  I dove in, restored everything, and tried my best to get all the required modules installed.  Some wouldn’t budge, however.  I kept getting errors about the lack of YAML, with Make returning a bad status:

    Make had returned bad status, install seems impossible

    Installing YAML, of course, didn’t succeed so I tried my fallback fix-all of re-installing my build tools.  Luckily, this is easily accomplished on Ubuntu with:

    sudo apt-get install build-essential

    To my chagrin, I hadn’t ever installed the build tools.  Of course, things now went swimmingly, with important things like Make actually being installed.  I think this points out something interesting, though, about the way we (I’m sure I’m not the first to do this) see development machines vs. production servers.  I, for example, wasn’t at all concerned with having GCC on a webserver, so it never got installed.  Similarly, it was a surprise to be reminded that a server distribution doesn’t include such tools by default either.
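    For reference, the whole recovery boils down to a couple of commands (a sketch of my session on Ubuntu; YAML just happened to be the module that surfaced the problem for me):

```shell
# The two commands to run by hand (both need sudo, and the first needs network access):
#   sudo apt-get install build-essential   # compiler, make, and headers CPAN relies on
#   sudo cpan YAML                         # then retry the module that was failing

# A quick sanity check that the toolchain is actually on the PATH afterwards:
for tool in make gcc; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: missing"
    fi
done
```

    The same check is worth running before any CPAN session on a fresh server; it would have saved me the whole detour.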

    What’s the moral?  Well it’s nothing clever, just always remember to install your usual tools, even if you don’t think you’ll need them.  See?  Isn’t it easy to feel smart when you catch such a silly mistake?

  • Working Faster in Eclipse

    When I started out in Java, I spent a few weeks using Netbeans quite satisfied.  It wasn’t a life changing experience, but it worked like an IDE should and I happily punched away.  After being challenged by a friend to give Eclipse a shot, I did and after a few weeks couldn’t turn back.  Something about the package was just “right” out of the box.  It felt smarter and like it was willing to work in the same vein that I work.  Opening a bracket and hitting enter brought me to an appropriate indentation on the new line.  I, of course, didn’t realize these things until needing to open Visual Studio one day and seeing how primitive working in it really was.

    With that out of the way, I’ve been happily working with Eclipse on all platforms without needing much in the way of plugins besides Subversive for Subversion and the occasional specialty tool here or there (Android tools come to mind).  So there, everyone is caught up on my fondness for Eclipse; people who see me working on a daily basis might call it more of a fetish, but I won’t go that deep into it.

    I do have another love affair with a certain text editor though, and it goes by the name of Vi.  Like my Netbeans-to-Eclipse transition, I used to use emacs without thinking much about it.  I knew a few of the commands and could get around reasonably well but was never really in love with it (this might have been different had I taken the time to really learn how to use it like a pro).  One day I was harassed about using emacs by a friend/Vi user.  Not feeling any particular allegiance, I tried it out to shut him up.  Just like emacs it was nothing special at first, but I watched my friend work with such speed that I couldn’t help but want to be like that.  I kept using it, reading up on the commands, and leaning on the cheat sheets; soon enough I was getting the hang of it and really enjoying it.  What really got me was the sheer ease of moving things around: once the commands became muscle memory, I was working faster than I ever had in any modern text editor.

    Get here by navigating to General-> Keys in the Eclipse preferences window


    I went along using both applications very happy with each of their strengths but never quite making the connection of putting the capabilities together.  That is, until I hit “dd” (Vi’s delete line command) in Eclipse without thinking and was disappointed to see “dd” typed rather than watching my current line disappear.  Realizing my mistake I quickly looked up how to delete a line in Eclipse. Sure enough, Ctrl-D came up and I happily went along.

    Suddenly I felt that much faster and started feeling my excitement about using Vi coming back.  I had now taken my favorite piece of software and started working a bit like my other favorite piece of software.  A day or so later, I wanted to cut a line.  Once again I was impressed by Eclipse and found an unbound “Cut Line” command which I quickly bound to Ctrl-Alt-X.

    What’s the lesson here?  Well, besides my underlying love for Eclipse, it should really be to spend more time looking for a better way to work.  I’ve learned over the last few months what people have been preaching for years: using the tools you have at hand to their maximum can pay dividends.  I have little doubt that I could set up Netbeans to function in a similar manner to my Eclipse installations; ditto for using emacs or Vi.  I’m just glad that I’ve dug deep enough to find what I was looking for rather than simply throwing up my hands with a “bah, it’ll never do what [insert application name] does”.  I feel empowered; maybe some day I’ll tackle my behemoth of an inbox… but I won’t get ahead of myself, I’m glad just to have improved one aspect of my workflow for now.  Try it for yourself, it’s a great feeling.

  • Access Network Shares From Avid Media Composer

    I suppose this is as much a personal reference as it is a public broadcast of knowledge, but here are the [few] steps to setting up Media Composer to read from network shares.

    Disclaimer: I haven’t gotten a chance to test this on OS X yet, I will update when I do.

    1. Start by mapping your network share to a drive letter in Windows.  XP, Vista, and 7 all have different places of varying visibility for this option, but it can consistently be found by right-clicking My Computer and choosing “Map Network Drive…”.
    2. In the window that pops up:
      • Select the drive letter that you wish to assign.  Windows should only display unused letters, but I’ve seen instances where it hasn’t.  Also be sure not to assign letters you are familiar with for removable storage (my Compact Flash reader, for example, always comes up as F:)
      • In the “Folder:” field, supply the location of the share.  It should look like “\\RemoteComputer\SharedFolderOnComputer”.
      • Depending on the nature of the remote operating system and the environment it is in, you may need to supply credentials other than those used to log in to your computer.  We’re set up with Active Directory here, so I use my network username/password.
    3. Start Media Composer
    4. Open the console (Tools->Console or Ctrl+F6), type in “alldrives” (no quotes), and hit Enter.  The console should respond that all drives are now active, whereas previously only “true” drives were available.  Note that the alldrives command works as an on/off switch: entering the command again will disable the use of network drives.
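    If you prefer the command line, step 1 can also be done from a Command Prompt with the “net use” command (a sketch; the drive letter, share path, and credentials are placeholders to replace with your own):

```batch
REM Map the share to Z: -- substitute your own path and user
net use Z: \\RemoteComputer\SharedFolderOnComputer /user:DOMAIN\username /persistent:yes

REM Later, to disconnect the mapping:
REM net use Z: /delete
```

    The /persistent:yes flag makes the mapping survive reboots, which saves redoing this before each editing session.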

    You should now have the option to capture to the mapped network drive of your choosing.  Be aware, though: depending on your network setup, working from shares can be slow.  Performance seems quite usable for now, and scrubbing is instantaneous, though playing from a new location on the timeline does take a second or two to get on track.  We’ve ordered a dedicated Intel NIC for the Avid rig in the hopes of seeing a bit of an improvement.  That being said, it’s still quite usable.

  • Finals Week… Before Finals Week?

    College.  It’s interesting for a myriad of reasons, but I seem to always be finding new ones.  As the sole person in my ring of friends to not be living at school, I lose out on the ability to party and forget about responsibilities for nights at a time, so I generally see a semester as four months of stress rather than just the last two weeks.  That said though, I’ve seen something quirky popping up and it doesn’t seem to be limited to just NYU-Poly.

    The tradition seems to have long been that you can party your life away, suddenly wake up for the finals, and things will magically work out.  I, and likely millions of other students, must be living on the moon then.  The reality, as it turns out, is that finals seem to be brought further into the semester every year.  Out of my 6 classes this semester, one has an assignment due during the normal finals week and another has a test on its scheduled day.  Other than those exceptions, the rule seems to be that come December 9th (the last day of classes), the semester is over.

    On the outside this seems good: vacation starts earlier and stress is done earlier.. what’s not to love?  The problem, however, is that finals are now mashed into regular class time.  My rationale for typing this is that I’m currently procrastinating on an assignment for a class where the final is due before the last homework (OK, it’s only due 24 hours before the other, but you get the point).  From a purely linear perspective, why take a final before you’ve actually learned all of the material or, in the case of homework, affirmed your knowledge of it?  The answer, of course, is that the finals are now less about content and more about the ability to write a paper or give a presentation.

    While I don’t agree with taking the focus off of content, I can’t blame anyone for being concerned with the direction in which this country’s communication skills are headed.  In the time since Al Gore invented the internet, it has slowly been perverted from a collection of knowledge to a bad billboard showing off exactly what morons are capable of when given the right tools (Evan Goer has an absolutely brilliant article on where some of this lunacy has come from).

    What we’re left with as a society is [mostly] young people capable only of expressing thoughts as either grammatically incorrect stubs or degrading personal attacks (or both).  Of course, this is a well documented phenomenon, so I won’t go into too much depth on it.  For all I care, the internet can go in whatever direction the idiot population allows it to, so long as I’m allowed a place to continue my own conversations devoid of the absurdity.

    On the other hand, what I will not accept is a higher education system that is teaching adult infants to write and communicate.  Our thirty and forty thousand dollar educations are being slowly degraded into extensions of high school rather than what we (and the world we’re supposed to inherit) deserve.

    Folks complain about Bush and Obama, but how does it feel that this will one day be able to run for president or prime minister?  (Disclaimer: I chose Fred because, deep down, I actually do find him enjoyable to watch and don’t mind his antics now and then.)

    What is the solution then, if the general population is sinking to such a level that Idiocracy rings truer and truer every day?  I say, let it be.  Don’t change the educational system, and don’t punish those there for a reason.  People will either wake up and join society or we will slowly turn into a coma state, but don’t hamper my capacity to learn by pushing me through middle aged writing exercises in the name of real world skills.  Grade school is for learning to be a person, university is supposed to teach you how to accomplish a task.  Get with the program or stay home.

  • The Future of Open Source Game Engines

    Designing and implementing a game is no easy task.  On top of the desire to formulate the best ideas for gameplay and storyline, there’s a technical element that is just as difficult to nail down.  Countless indie developers conceive of a great idea but ultimately flounder and lose direction as their brainchild becomes bloated and unmanageable.

    It has been a tough struggle for small scale games to break out of the realm of Flash and make the jump to a full desktop application.  The reasons for this are twofold: manpower and technical tools.  Let me qualify this seemingly short sighted statement with the assertion that games are downright hard to make.  It would follow that so much work divided over a number of people makes the task easier, but what are these folks doing once they’re all in on the plan?  Well, let’s look at how a game studio is set up: you’ve got digital content creators (modelers, animators, environmental designers), artists (menus, 2D assets, etc.), programmers (insert code here), writers, and producers.  Phew, that’s a lot of stuff.

    Now the trick is to make it all come together somewhere along the pipeline in the chosen game engine.  The choice of a game engine is one that can’t be taken lightly, and as almost anyone who has simply picked the first one available will tell you, you will go back and rethink your decision.  For years, indie developers have been pretty limited in their choice of engine.  The big commercial engines traditionally cost big commercial bucks, while the more affordable engines targeted at smaller developers give little hope of competing with any reasonably sized releases.

    Using something like Torque could give decent results, but any product it produces still has that small scale taste.  One of the inherent weaknesses of this particular platform is the content pipeline.  Going from Maya or 3DS to Torque was not a pleasant experience.  If you wanted animation, that was another can of worms which had to be opened (and was still never easy enough to get down to a muscle memory).  While cheap and Mac/Windows compatible, Torque is a hard sell considering what else is out there.

    Now consider Unity, an engine that has come very much to the forefront in the past few years based on a few interesting paradigms.  First, it’s deployable to damn near any platform (including cross-platform web browsers), and the developers seem intent on keeping up with the newest stuff that can run anything with 3D graphics.  Second, it’s probably the first game engine to have started out as a Mac-only development toolkit.  While its games could be played anywhere, it could only be developed on OS X until recently.  Also earth-shattering was the announcement this October that the indie version would now be offered free.  Fiscally speaking, saving $200 over the previous price of the package is not a huge deal, but it does take the price factor out completely.  We’re all well aware that people pirate $9 software titles just because they can, so I consider a free offering of this caliber to be quite significant.  What’s more interesting to serious indie developers is likely to be the relatively loose license that Unity allows for the indie version.

    Of course, the elephant in the room is Unreal Engine 3, whose release as a free development kit has been a hot topic for a few weeks now.  While still a great engine, UE3 is starting to show its age when compared to the kind of work that Crytek is producing (regardless of how much we all hated the original Crysis for everything it was/wasn’t) with their own engine.  Unreal is exposing a real commercial powerhouse to everyone who wants to try it, which is great.  It allows content creators to see what their assets look like in game as well as giving aspiring game designers invaluable experience on one of the most sought-after platforms.  One important thing to note, however, is that Epic has you on the hook for some considerable cash depending on what you do with the game you or your company creates.

    While all of these steps toward the empowerment of indie developers are valuable, it is important to step back and take a look at the truly free options available.  Before the incrementally free (though still costly), there was the totally open source.  Engines like jMonkeyEngine, Delta3D, and Panda3D have found limited success in certain niche areas, but have never quite taken off like their commercial cousins.  With such open platforms available, why is their popularity still limited?

    Drawing from my intimate knowledge of jMonkey and the wealth of users that make up its community, there are shortcomings even for the project that I love so dearly.  The most common gripe that we come across in the forums and chat channel is the lack of a world editor.  This is a major feature that game houses, for good reason, find attractive.  Put simply, it’s hard to lay out a scene in jME.  Mostly procedural projects, like Betaville, don’t quite call for placing objects in the scene and setting up lighting in such an intimate manner, but it is an absolute necessity when creating a first person shooter or RPG.  Somehow, though, games get released, and those who have weathered the trenches of jME are in a good position to help the newcomers.  For some reason, this is an easier task for open-source projects than for big commercial packages.  The community contributions and support at jME are impressive and humbling every day.  Still, for all the excited newcomers, many are forced to jump ship when they realize the difficulties of piecing together a world line by line.

    Some likely go on to explore the Delta3D engine, as its STAGE editor is certainly an appetizing offering.  Unfortunately, however, little in the way of support can be found on the lightly used forum.  They’re also going through their own growing pains about how to expand the project to be more self-sufficient as far as prolonged development goes.  See here for an interesting read on that particular topic.

    So, the debate between engines really boils down to two big factors: money and usability.  Although games like Grappling Hook and Mad Skills Motocross have been able to sneak past the shortcomings of open source and see the light of day, they are exceptions to the rule, and the vast majority of released games continue to be developed in commercial engines.  It is clearly worth the price to many companies who need to answer to investors or are on rushed time frames, but how far can open source be pushed?  Can the right people at the right time make the right nuclear mixture to have gaming explode with open engines?  It hasn’t happened yet, but nothing says it can’t either.  Some will stay while others move on to the other engines available.  Personally, I’m firmly planted in open source for now and can’t wait to see where it takes gaming.