Friday, February 07, 2014

Neverwinter Nights Diamond Edition, Video Lag, Switchable Graphics

Recently I broke out Neverwinter Nights Diamond Edition (v1.69) again on my HP dv6 Pavilion laptop, but despite the laptop's AMD Radeon HD 6490M, I hit almost unplayable video lag whenever I tried to move around in the game. It looked and felt like <1 FPS.


I did a quick google and there was mention here that NWN does not like multiple processors, and setting the CPU Affinity in nwnplayer.ini should fix it.
[Game Options]
Client CPU Affinity=1

It did not fix the issue for me, so I had another look. Now my dv6 comes with Switchable Graphics: a feature where the laptop has both integrated Intel graphics and the discrete Radeon HD 6490M, and can automatically switch between the two per application to save power and reduce heat.

Naturally I had checked that my Catalyst Control Center had Neverwinter Nights set to the "High Performance" GPU when trying to play the game. However, I noticed that Neverwinter Nights' graphics configuration utility initially complained about not finding the appropriate drivers. I dug deeper and found this issue: OpenGL Applications Cannot Use Discrete GPU with Intel + AMD Switchable Graphics

This issue affects a range of HP laptops with switchable graphics, including the Pavilion dv6/dv7/g4/g6 and the new ENVY series. The fix was surprisingly easy (documented in the link above). I had to go into my laptop BIOS and switch the Switchable Graphics mode from Dynamic to Static, which meant only one graphics card would be active at any time (instead of splitting graphics card usage by application), and then go into "High Performance GPU" mode whenever I wanted to play Neverwinter Nights.

Saturday, January 11, 2014

Building an executable JAR with maven

Making a JAR executable

One common thing I come across is building executable JARs in maven, i.e. JARs that you can run as:
java -jar myapp.jar
The following maven snippet makes the resulting JAR executable, that is, com.myapp.Main (or your main class of choice) is executed when the jar is run by java as above. It does so by setting the main class in your JAR manifest. (You can learn more about main classes and manifests here)


   <build>
      <plugins>
         ...
         <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-jar-plugin</artifactId>
            <configuration>
               <archive>
                  <manifest>
                     <mainClass>com.myapp.Main</mainClass>
                  </manifest>
               </archive>
            </configuration>
         </plugin>
         ...
      </plugins>
   </build>
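With that configuration in place, the manifest that maven-jar-plugin writes into the JAR (META-INF/MANIFEST.MF) will contain a Main-Class entry, roughly like this (other generated entries will vary):

```
Manifest-Version: 1.0
Main-Class: com.myapp.Main
```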

However, if your JAR has any dependencies, you will likely get ClassNotFoundException errors because java cannot find those dependencies. Note that you cannot fix this with -cp alone: when java is launched with -jar, it ignores the -cp/-classpath options (and the CLASSPATH environment variable) and uses only the Class-Path from the JAR's manifest. If you want to supply the class path on the command line, run the main class directly instead, e.g.:
java -cp "myapp.jar;mydependency1.jar;lib/*" com.myapp.Main
either specifying each jar or using wildcards (if using Java 6 or higher). The ";" separator is for Windows; use ":" on Linux/OS X.

Distributing with a subdirectory of dependencies

Another way is to have the build copy those dependencies into a subdirectory next to your JAR, and reference them from the manifest. This is more convenient if you are packaging the application to run on another machine or in an installer.
   <build>
      <plugins>
         ...
         <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-jar-plugin</artifactId>
            <configuration>
               <archive>
                  <manifest>
                     <addClasspath>true</addClasspath>
                     <mainClass>com.myapp.Main</mainClass>
                     <classpathPrefix>lib/</classpathPrefix>
                  </manifest>
               </archive>
            </configuration>
         </plugin>
         <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-dependency-plugin</artifactId>
            <executions>
               <execution>
                  <id>copy-dependencies</id>
                  <phase>package</phase>
                  <goals>
                     <goal>copy-dependencies</goal>
                  </goals>
                  <configuration>
                     <outputDirectory>${project.build.directory}/lib</outputDirectory>
                  </configuration>
               </execution>
            </executions>
         </plugin>
         ...
      </plugins>
   </build>
This does two things: it gets maven-dependency-plugin to copy the dependencies (including transitive dependencies) into a lib subdirectory at package time, and it tells maven-jar-plugin to write the corresponding classpath entries into the manifest of your JAR.
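For illustration, if your project depended on (hypothetical) dep-a-1.0.jar and dep-b-2.1.jar, the generated manifest would then look roughly like:

```
Manifest-Version: 1.0
Main-Class: com.myapp.Main
Class-Path: lib/dep-a-1.0.jar lib/dep-b-2.1.jar
```

so java -jar myapp.jar works as long as the lib directory is shipped alongside the JAR.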


Distributing as a single JAR (containing all dependencies)


The above method is great, but sometimes you want to have everything in a single JAR. You can use maven-assembly-plugin instead to build a single jar containing everything.

   <build>
      <plugins>
         ...
         <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
               <descriptorRefs>
                  <descriptorRef>jar-with-dependencies</descriptorRef>
               </descriptorRefs>
               <archive>
                  <manifest>
                     <mainClass>com.myapp.Main</mainClass>
                  </manifest>
               </archive>
            </configuration>
            <executions>
               <execution>
                  <id>make-my-jar-with-dependencies</id>
                  <phase>package</phase>
                  <goals>
                     <goal>single</goal>
                  </goals>
               </execution>
            </executions>
         </plugin>
         ...
      </plugins>
   </build>

This will, in addition to the normal output jar, make another jar e.g. "myapp-jar-with-dependencies.jar" that contains all the classes and resources from the dependencies. You can run this directly:
java -jar myapp-jar-with-dependencies.jar
Note that here the main class is specified in maven-assembly-plugin's configuration rather than maven-jar-plugin's.

This method does not always work: because all the dependency JARs are unpacked and repackaged into one, files with the same path (for example META-INF/services entries) can overwrite one another, and some libraries will not run properly after being processed like that. I have not come across such a case so far, but I recommend this approach only for small apps.


Thursday, January 09, 2014

Passing command line parameters to the maven release plugin

Today I re-learnt some things:

Passing command line parameters to the maven release plugin

When using the maven release plugin, e.g. with "release:prepare", the plugin will fork a subprocess "mvn clean verify --no-plugin-updates" on the project. This subprocess does not inherit the profiles and properties you specified on the original command line. For example:
mvn -Pthis-profile -Duse.property=that release:clean release:prepare release:perform
will not pass those parameters to the subprocess. You need to define them with -Darguments instead, like:
mvn "-Darguments=-Pthis-profile -Duse.property=that" release:clean release:prepare release:perform 
See http://maven.apache.org/maven-release/maven-release-plugin/prepare-mojo.html#arguments

Why doesn't it work for me? Parent pom override?

Note that if you use parent poms, they (or their ancestors) may specify the <arguments> configuration for the maven release plugin, in which case passing parameters via -Darguments stops working:
<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-release-plugin</artifactId>
   ...
   <configuration>
      <arguments>-Dmy.property=this</arguments>
      ...
   </configuration>
</plugin>
The correct way would have been to define it like this:
      <arguments>-Dmy.property=this ${arguments}</arguments>
so that -Darguments continues to work. If you run into this, you can either fix the parent pom or override the configuration again in your project pom so that -Darguments works.

Passing multiple properties and shell escaping

When passing multiple properties via the -Darguments method, it is tempting to write something like:
-Darguments='-Pthis-profile -Duse.property=that'
however this is not quoted correctly in every shell (cmd.exe on Windows, for instance, does not treat single quotes as quoting characters), and the properties may not be passed correctly (you might end up passing a profile name of "this-profile -Duse.property=that"). You want:
"-Darguments=-Pthis-profile -Duse.property=that"

Proper googling

When you want to find information on maven's -Darguments, searching for "maven -Darguments" is probably your first attempt. But hold on: search syntax treats the leading hyphen as an exclusion, so that query searches for "maven" while omitting all results containing "Darguments" (which are precisely the results you want). Searching for "maven Darguments" did the trick instead.



Wednesday, January 01, 2014

Migrate from SVN to Git/Github

Today I migrated the PyFileServer codebase from its original home in SVN on BerliOS to Github, without losing commit history.

Here's how I did it.

First, I created a git version of the SVN repository locally using git-svn.
The first thing you need is a text file (users.txt) that maps your SVN users to Git/Github users, in the format:
user1 = First Last Name <email@address>

then you call:
/fs/migration> git svn clone --stdlayout -A users.txt svn://svn.berlios.de/pyfilesync pyfilesync
the --stdlayout flag indicates that the SVN repository follows the standard trunk/branches/tags structure, which helps git-svn identify branches. Git-svn will pull the commits from SVN and replay them in your repository as Git commits. Note: if it encounters a user that is not in your users.txt file, the process will stop, but you can always fix the users.txt file, change into the repository, and use "git svn fetch" to resume the process (you don't have to re-specify the users.txt location when calling git svn fetch).

This process automatically copies over the svn trunk as the local git master, but the svn branches and tags remain in the git repository as remote branches. To see these branches, use:
/fs/migration/pyfilesync> git branch -r

to "copy" the remote branches over as local branches, use:
/fs/migration/pyfilesync> git checkout -b <new local branch name> <remote branch name>
e.g.
/fs/migration/pyfilesync> git checkout -b code-review code-review
/fs/migration/pyfilesync> git checkout -b paste-prune paste-prune
...
SVN tags are also copied over as branches, so if you want proper Git tags you have to re-create them yourself.
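That re-tagging can be scripted. A sketch, assuming the git-svn tags came over as remote refs under refs/remotes/tags/* (the tag name release-1.0 below is hypothetical, and the loop simply tags the commit each tag branch points at):

```shell
set -e
cd "$(mktemp -d)"
# Simulate a git-svn clone in which the SVN tag "release-1.0"
# came over as a remote branch under refs/remotes/tags/:
git init -q repo && cd repo
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "import"
git update-ref refs/remotes/tags/release-1.0 HEAD

# Turn each tags/* branch into an annotated git tag, then drop the branch ref:
for ref in $(git for-each-ref --format='%(refname:strip=3)' refs/remotes/tags); do
   git -c user.email=a@b -c user.name=demo tag -a "$ref" -m "svn tag $ref" "refs/remotes/tags/$ref"
   git update-ref -d "refs/remotes/tags/$ref"
done
git tag -l
```

Note that git-svn actually creates an extra commit for each SVN tag; tagging the tip directly, as above, is a simplification that keeps that commit in the tag's history.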


Next, I created the Github repository project, and then cloned it onto my local computer.

/fs/migration> git clone https://github.com/cwho/pyfileserver.git

At this point I have my svn-import git repo at /fs/migration/pyfilesync, and my Github-cloned git repo at /fs/migration/pyfileserver.

Then I went into the pyfileserver git repository, and added the pyfilesync repository as a remote repository:
/fs/migration/pyfileserver> git remote add -f svnimport ../pyfilesync
and merged in the changes on its master branch (the pyfileserver repo is currently on master):
/fs/migration/pyfileserver> git merge svnimport/master
You have to bring over the remaining branches as well:
/fs/migration/pyfileserver> git checkout -b code-review svnimport/code-review
/fs/migration/pyfileserver> git checkout -b paste-prune svnimport/paste-prune
... 

Finally, I pushed the new commits and branches on the local pyfileserver git repository back to Github. All done!
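The final push is just the usual git push. A self-contained sketch, using a local bare repository as a stand-in for the Github remote (the branch name code-review matches the migration above; the rest is simulated):

```shell
set -e
work=$(mktemp -d)
git init -q --bare "$work/hub.git"            # stand-in for the Github repository
git init -q "$work/pyfileserver"
cd "$work/pyfileserver"
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "merged svn history"
git branch -q code-review                     # one of the migrated branches
git remote add origin "$work/hub.git"
git push -q origin --all                      # push master and all other branches
git push -q origin --tags                     # push any re-created tags
git ls-remote --heads origin                  # verify what arrived on the remote
```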


The general schematic of what I did above looks like this:

BerliOS SVN --(git svn clone)--> pyfilesync (local git repo) --(git remote add + git merge)--> pyfileserver (local git repo) --(git push)--> Github