Friday, February 07, 2014

Neverwinter Nights Diamond Edition, Video Lag, Switchable Graphics

Recently I broke out Neverwinter Nights Diamond Edition (v1.69) again on my HP dv6 Pavilion laptop, but despite the AMD Radeon HD 6490M that the laptop comes with, I found almost unplayable video lag happening whenever I tried to move around in the game. It looked and felt like <1 FPS.


I did a quick google and there was mention here that NWN does not like multiple processors, and that setting the CPU Affinity in nwnplayer.ini should fix it.
[Game Options]
Client CPU Affinity=1

It did not fix the issue for me, so I had another look. Now my dv6 comes with Switchable Graphics - a feature where the laptop has both Intel integrated graphics and the Radeon HD 6490M, and can automatically switch between the two per application in order to save power and reduce heat.

Naturally, I had checked that my Catalyst Control Center had Neverwinter Nights set to the "High Performance" GPU when trying to play the game. However, I noticed that when I used Neverwinter Nights' graphics configuration utility, it initially complained about not finding the appropriate drivers. I dug deeper and found this issue: OpenGL Applications Cannot Use Discrete GPU with Intel + AMD Switchable Graphics

This issue affects a range of HP laptops with switchable graphics, including the Pavilion dv6/dv7/g4/g6 and the new ENVY series. The fix was surprisingly easy (documented in the link above). I had to go into my laptop BIOS and switch the Switchable Graphics mode from Dynamic to Static, which meant only one graphics card would be active at any time (instead of splitting graphics card usage by application), and then go into "High Performance GPU" mode whenever I wanted to play Neverwinter Nights.

Saturday, January 11, 2014

Building an executable JAR with maven

Making a JAR executable

One common thing I come across is building executable JARs in maven, i.e. JARs that you can run as:
java -jar myapp.jar
The following maven snippet makes the resulting JAR executable - that is, com.myapp.Main (or your main class of choice) is executed when the JAR is run by java as above. It does so by setting the main class in your JAR manifest. (You can learn more about main classes and manifests here.)


   <build>
      <plugins>
         ...
         <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-jar-plugin</artifactId>
            <configuration>
               <archive>
                  <manifest>
                     <mainClass>com.myapp.Main</mainClass>
                  </manifest>
               </archive>
            </configuration>
         </plugin>
         ...
      </plugins>
   </build>
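
For reference, the resulting META-INF/MANIFEST.MF inside the JAR should then contain a Main-Class entry along these lines:

```
Main-Class: com.myapp.Main
```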

However, if your JAR has any dependencies, you may get ClassNotFoundException errors because java cannot find the dependencies. Note that java ignores the -cp/-classpath option when -jar is used, so to pull in those dependencies from the command line you have to put your own JAR on the class path as well and name the main class explicitly, e.g.:
java -cp "myapp.jar;mydependency1.jar;lib/*" com.myapp.Main
either specifying each jar or using wildcards (supported since Java 6). Use ':' instead of ';' as the separator on Linux/Mac.

Distributing with a subdirectory of dependencies

Another way is to have the build put those dependencies in a subdirectory next to the JAR, and reference them in the manifest. This is more convenient if you are packaging the app to run on another machine or in an installer.
   <build>
      <plugins>
         ...
         <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-jar-plugin</artifactId>
            <configuration>
               <archive>
                  <manifest>
                     <addClasspath>true</addClasspath>
                     <mainClass>com.myapp.Main</mainClass>
                     <classpathPrefix>lib/</classpathPrefix>
                  </manifest>
               </archive>
            </configuration>
         </plugin>
         <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-dependency-plugin</artifactId>
            <executions>
               <execution>
                  <id>copy-dependencies</id>
                  <phase>package</phase>
                  <goals>
                     <goal>copy-dependencies</goal>
                  </goals>
                  <configuration>
                     <outputDirectory>${project.build.directory}/lib</outputDirectory>
                  </configuration>
               </execution>
            </executions>
         </plugin>
         ...
      </plugins>
   </build>
This does two things: it gets maven-dependency-plugin to copy the dependencies (including transitive dependencies) into a lib subdirectory at package time, and it tells maven-jar-plugin to add the corresponding classpath entries when writing the manifest for your JAR.
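With both plugins in place, the generated manifest gains a Class-Path entry next to the main class. Roughly (the dependency jar names here are just illustrative):

```
Main-Class: com.myapp.Main
Class-Path: lib/mydependency1.jar lib/mydependency2.jar
```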


Distributing as a single JAR (containing all dependencies)


The above method is great, but sometimes you want to have everything in a single JAR. You can use maven-assembly-plugin instead to build a single jar containing everything.

   <build>
      <plugins>
         ...
         <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
               <descriptorRefs>
                  <descriptorRef>jar-with-dependencies</descriptorRef>
               </descriptorRefs>
               <archive>
                  <manifest>
                     <mainClass>com.myapp.Main</mainClass>
                  </manifest>
               </archive>
            </configuration>
            <executions>
               <execution>
                  <id>make-my-jar-with-dependencies</id>
                  <phase>package</phase>
                  <goals>
                     <goal>single</goal>
                  </goals>
               </execution>
            </executions>
         </plugin>
         ...
      </plugins>
   </build>

This will, in addition to the normal output jar, produce another jar, e.g. "myapp-jar-with-dependencies.jar", that contains all the classes and resources from the dependencies. You can run this directly:
java -jar myapp-jar-with-dependencies.jar
Note that here we specify the main class in the maven-assembly-plugin configuration rather than in maven-jar-plugin.

This method does not always work: because all the dependency JARs are unpacked into one, files with the same path (e.g. some META-INF resources) can overwrite each other, so some libraries may not run properly when repackaged like this. I have not come across such a case so far, but I recommend it only for small apps.


Thursday, January 09, 2014

Passing command line parameters to the maven release plugin

Today I re-learnt some things:

Passing command line parameters to the maven release plugin

When using the maven release plugin, e.g. with "release:prepare", the plugin will run a subprocess "mvn clean verify --no-plugin-updates" on the project. This subprocess does not inherit the arguments you specified to maven; for example:
mvn -Pthis-profile -Duse.property=that release:clean release:prepare release:perform
will not pass those parameters to the subprocess. You need to wrap them in -Darguments to pass them through, like:
mvn "-Darguments=-Pthis-profile -Duse.property=that" release:clean release:prepare release:perform 
See http://maven.apache.org/maven-release/maven-release-plugin/prepare-mojo.html#arguments

Why doesn't it work for me? Parent pom override?

Note that if you use parent poms, one of them (or their ancestors) may specify the <arguments> configuration for the maven release plugin, in which case passing in parameters via -Darguments stops working.
<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-release-plugin</artifactId>
   ...
   <configuration>
      <arguments>-Dmy.property=this</arguments>
      ...
   </configuration>
</plugin>
The correct way would have been to define it like this:
      <arguments>-Dmy.property=this ${arguments}</arguments>
so that -Darguments continues to work. If you find this, you can either change the parent pom or override the configuration again in your project pom to allow -Darguments to work.

Passing multiple properties and shell escaping

When passing multiple arguments via the -Darguments method, you might naturally write something like:
-Darguments='-Pthis-profile -Duse.property=that'
however, depending on your shell, this is not escaped the way you expect, and the properties may not be passed correctly (you might end up passing a profile name of "this-profile -Duse.property=that"). You want:
"-Darguments=-Pthis-profile -Duse.property=that"

Proper googling

When you want to find information on maven's -Darguments, "maven -Darguments" is probably the first thing you'll search for. But hold on: search syntax means that will search for "maven" and - yes - exclude all results containing "Darguments" (which are exactly the results you want). "maven Darguments" did the trick instead.



Wednesday, January 01, 2014

Migrate from SVN to Git/Github

Today I migrated the PyFileServer codebase from its original home in SVN on BerliOS to Github, without losing commit history.

Here's how I did it.

First, I created a git version of the SVN repository locally using git-svn.
The first thing you need is a text file (users.txt) that maps your SVN users to Git/Github users, in the format:
user1 = First Last Name <email@address>
then you call:
/fs/migration> git svn clone --stdlayout -A users.txt svn://svn.berlios.de/pyfilesync pyfilesync
the --stdlayout flag indicates that the SVN repository follows the standard trunk/branches/tags structure, which helps git-svn identify branches. Git-svn will pull the commits from SVN and populate them in your repository as Git commits. Note: if it encounters a user that is not in your users.txt file, the process will stop, but you can always fix the users.txt file, cd into the repository, and use "git svn fetch" to resume the process (you don't have to re-specify the users.txt location when calling git svn fetch).

This process automatically copies over the svn trunk as the local git master, but the svn branches and tags remain in the git repository as remote branches. To see these branches, use:
/fs/migration/pyfilesync> git branch -r

To "copy" the remote branches over as local branches, use:
/fs/migration/pyfilesync> git checkout -b <new local branch name> <remote branch name>
e.g.
/fs/migration/pyfilesync> git checkout -b code-review code-review
/fs/migration/pyfilesync> git checkout -b paste-prune paste-prune
...
SVN tags are also copied over as branches, so you can re-tag them as proper Git tags if you wish.
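A small loop can do the re-tagging. This is only a sketch: it assumes git-svn left the tags as remote branches named tags/<name> (as --stdlayout produces); check the output of git branch -r and adjust the prefix if yours differ.

```shell
# Convert git-svn "tag branches" into real Git tags.
convert_svn_tags() {
   git for-each-ref --format='%(refname:short)' refs/remotes |
   grep '^tags/' |
   while read branch; do
      tag="${branch#tags/}"
      # create a tag pointing at the same commit as the tag branch
      git tag "$tag" "refs/remotes/$branch"
   done
}
```

After running it, git tag should list the converted tags, and you can delete the now-redundant tag branches with git branch -rd.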


Next, I created the Github repository project, and then cloned it onto my local computer.

/fs/migration> git clone https://github.com/cwho/pyfileserver.git

At this point I have my svn-import git repo at /fs/migration/pyfilesync, and my Github-cloned git repo at /fs/migration/pyfileserver

Then I went into the pyfileserver git repository, and added the pyfilesync repository as a remote repository:
/fs/migration/pyfileserver> git remote add -f svnimport ../pyfilesync
and merged in the changes from its master branch (the pyfileserver repo is currently on master):
/fs/migration/pyfileserver> git merge svnimport/master
You have to bring over the remaining branches as well:
/fs/migration/pyfileserver> git checkout -b code-review svnimport/code-review
/fs/migration/pyfileserver> git checkout -b paste-prune svnimport/paste-prune
... 

Finally, I pushed the new commits in the local pyfileserver git repository back to Github. All done!


The general schematic of what I did above: SVN repository (BerliOS) -> git svn clone -> local git repo (pyfilesync) -> git remote add + merge -> Github clone (pyfileserver) -> push to Github.

Sunday, June 18, 2006

var partition filling up

Recently I had the chance to do some work on a linux server and encountered some problems with the /var partition filling up really quickly.

du vs df
There are two common tools for looking at disk usage: df, which lists all the partitions in your system with the amount of space used/free and the usage percentage, and du, usually run as du -sh *, which lists the sizes of all the files in the current directory (including subdirectories). df is helpful for finding out whether a partition is filling up, and du for finding out which subdirectory is contributing how much to that space usage.
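As a sketch of the usual workflow (the paths are just examples):

```shell
# 1) Which partition is filling up?
df -h /var
# 2) Which subdirectories are responsible? Show the five biggest.
du -sh /var/* 2>/dev/null | sort -h | tail -n 5
```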

MySQL InnoDB
If your /var/lib/mysql is filling up, it's probably the InnoDB tables.

I had MySQL 5.0 and am using InnoDB tables. InnoDB (unless you configure it to use a separate file for each table, which is useful for things like putting separate databases on separate partitions) uses a number of fixed-size (small, ~5M each) redo log files (by default /var/lib/mysql/ib_logfile0 and /var/lib/mysql/ib_logfile1) as well as a shared tablespace file (by default /var/lib/mysql/ibdata1). The shared tablespace will (note this!) keep growing in size and will not shrink even if you delete records.

In our case the shared tablespace grew too large, so we added a new autoextend file on a larger partition, with a monthly note to dump and re-import all InnoDB data to reclaim the space. The following documentation link has details on both:
http://dev.mysql.com/doc/refman/5.0/en/adding-and-removing.html
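
As a sketch of the first fix, in my.cnf (the /bigdisk path and sizes are placeholders; the entry for the existing ibdata1 must match its actual on-disk size, as the linked documentation explains):

```
[mysqld]
# optional: use one tablespace file per InnoDB table for new tables
innodb_file_per_table
# empty home dir so absolute paths can be used below
innodb_data_home_dir =
# keep the existing file at its current size; add an autoextending file elsewhere
innodb_data_file_path = /var/lib/mysql/ibdata1:5G;/bigdisk/mysql/ibdata2:10M:autoextend
```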


sendmail clientmqueue
Another candidate for "var filling up" troubles is /var/spool/clientmqueue. This is a mail queue that stores, as files, emails that could not be processed or otherwise need to be cached (mostly for retrying). It is separate from mqueue (google "mqueue vs clientmqueue").

Why is clientmqueue filling up? Most likely there are undeliverable messages headed towards your users. If you are running a sendmail server, then your sendmail is likely misconfigured and undeliverable spam mail is being stored in clientmqueue.

If you aren't running a sendmail server, then the only way undeliverable mail ends up on your server is locally - something invoking sendmail directly. Check whether there are automated scripts on your server sending users mail that is not being delivered. One likely cause is cron emailing the root or owner user about cron job status or output. To fix this, you can add MAILTO="" to all the cron files in your system, including user/root crontabs and cron.daily, cron.weekly, etc. I also usually direct cron output to the null device by appending > /dev/null to the cron job's command, just in case.
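For example, a crontab entry silenced in both ways (the script path is hypothetical):

```
MAILTO=""
0 3 * * * /usr/local/bin/nightly-job.sh > /dev/null 2>&1
```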

We also found alternate workarounds like getting cron to use mail.local instead of sendmail to send emails (again google is your friend) but did not try to implement that.

Open files

The way linux treats open files is that if an open file is deleted, the file doesn't actually get deleted until it is closed. This means, for example, that if your application(s) are reading from/writing to a (log) file and you delete the file from the shell, the file is not actually deleted until your application(s) close it. The file is removed from the directory listing immediately, though, so it is not apparent to the user that the file is still there and taking up space.

This is where du and df can report different stats: du, which looks at each file, will not report the space taken up by deleted open files; df, which looks at disk/block usage, will. If you see a huge difference between the space usage reported by du and df, this is usually the cause.

One useful tool for seeing open files and the applications/processes using them is lsof, e.g. lsof /var gives a listing of all the open files under /var together with the process ids and owning users.

In our case, we had rotated the Apache logs for the day but mistakenly left Apache running such that some of its processes were still writing to the old (deleted) log file, which grew to be quite big. Forcing a full stop and start with httpd -k stop; httpd -k start did the trick. httpd -k restart might work as well, since I believe all the processes are eventually killed and recreated, but I'm not too sure when that would happen.

Sunday, May 28, 2006

Supporting CJK characters in MySQL

Recently I had to write some CJK/unicode characters from Java into MySQL. I use MySQL 5.1 and the connector mysql-connector-java-3.1.10-bin.jar .

The connector appears to read CJK/unicode characters correctly in text retrieved from queries, if they were entered into the database properly as unicode via MySQL query browser. But writing CJK/unicode to the database (insert, update) through the connector, with either a normal SQL statement or setString() on a PreparedStatement, appeared to translate each CJK character to a single ? (code 3F).

It appeared that I had to write the text data as bytes instead to get it stored properly as unicode. I chose the encoding UTF-8, and changed the column charset and collation for the text field holding the CJK text to "utf8" and "utf8_general_ci". Then, to write the text in an insert operation using a PreparedStatement:

Here we assume we already have the database Connection conn established, and we are trying to insert text from String cjkword:


PreparedStatement stmt = conn.prepareStatement(
      "INSERT INTO cjktable (cjktext) VALUES ( ? )");
try {
   // write the text as raw UTF-8 bytes rather than as a String
   stmt.setBytes(1, cjkword.getBytes("UTF-8"));
} catch (UnsupportedEncodingException e) {
   // fall back to the platform default encoding
   stmt.setBytes(1, cjkword.getBytes());
}
stmt.executeUpdate();


The try block catches java.io.UnsupportedEncodingException, which is thrown if you specify an unsupported encoding. UTF-8 is one of the encodings every Java implementation must support, so it should not occur in practice.
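Note that Connector/J can also be told the encoding up front via connection properties, which may avoid the byte-level workaround entirely; a sketch of the JDBC URL (host and database name are placeholders):

```
jdbc:mysql://localhost/mydb?useUnicode=true&characterEncoding=UTF-8
```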

--Edited per comments

Tuesday, May 09, 2006

Morning!

And after three months of non-blogging :), here is a view from my window at 5.20 in the morning.