Archive for the 'University' Category

Quiet

Sunday, June 15th, 2008

You may have noticed that it’s been a little quiet around here.

This would be because I am working on my patch to provide Cache Coherency Event Profiling for Cachegrind, a Valgrind tool.

This is a university project for Imperial College London and is due in on Wednesday 18th June.

Further updates, including code and a pretty report will be available online once this is over.

PANIC will be over soon :-)

PANIC!

Friday, April 18th, 2008

It’s exam season!

Exams week 1

Exams week 2

Cache Coherency Event Profiling using Cachegrind, a Valgrind Tool

Tuesday, January 22nd, 2008

I am currently writing a patch for Cachegrind to add the following functionality:

  • Simulation of multiple caches (cores)
  • Simulation of various Cache Coherency Protocols
  • Profiling of Cache Coherency Events
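To give a rough idea of what such a simulator has to track, here is a minimal sketch of a MESI-style coherency state machine with event counting. This is purely illustrative: the class, event names, and protocol choice are my assumptions, not code from the actual Cachegrind patch.

```python
# Minimal sketch of MESI-style cache coherency tracking across cores.
# Illustrative only -- not the actual Cachegrind patch code.

MODIFIED, EXCLUSIVE, SHARED, INVALID = "M", "E", "S", "I"

class CoherentCaches:
    def __init__(self, num_cores):
        # One map per core: cache-line address -> MESI state
        self.state = [dict() for _ in range(num_cores)]
        self.events = {"invalidations": 0, "misses": 0}

    def read(self, core, line):
        if self.state[core].get(line, INVALID) == INVALID:
            self.events["misses"] += 1
            shared = False
            for other, cache in enumerate(self.state):
                if other != core and cache.get(line, INVALID) != INVALID:
                    # A fuller model would write back a Modified copy here.
                    cache[line] = SHARED
                    shared = True
            self.state[core][line] = SHARED if shared else EXCLUSIVE
        # Reads in M/E/S hit locally with no coherency traffic.

    def write(self, core, line):
        st = self.state[core].get(line, INVALID)
        if st != MODIFIED:
            if st in (SHARED, INVALID):
                # Invalidate all other copies -- a coherency event to profile.
                for other, cache in enumerate(self.state):
                    if other != core and cache.get(line, INVALID) != INVALID:
                        cache[line] = INVALID
                        self.events["invalidations"] += 1
            if st == INVALID:
                self.events["misses"] += 1
        self.state[core][line] = MODIFIED
```

For example, two cores reading the same line leave both copies Shared, and a subsequent write by one core invalidates the other's copy.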

Please watch this space for updates and patch downloads.

If you are interested in this patch, please contact me by emailing andy@andymillar.co.uk.

Venue, a 3D virtual multimedia environment built upon publish/subscribe networking

Sunday, January 6th, 2008

Venue, a 3D virtual multimedia environment built upon pub/sub (publish/subscribe) networking, was developed by Andy Millar, Jack Griffith, Sean Lavelle, Jee Heng and Marten Mark and supervised by Dr Paul Kelly.

Abstract:

The Publish/Subscribe architecture is a well-known model that allows for greater scalability, because message filtering reduces network traffic. One of the platforms normally associated with investigating this concept is the massively multiplayer online game. At the same time, the increasing popularity of video conferencing applications led our group to examine the possibility of integrating these two very different concepts.
The main aim of this project is to explore the potential of the Publish/Subscribe model to support such an application. By presenting our performance analysis results and demonstrating the features and benefits of our multimedia conferencing platform prototype, we have created a proof-of-concept application that supports the Publish/Subscribe model, one which could possibly have commercial demand.

Venue 

Venue, pictured above, is a proof-of-concept application created to demonstrate that publish/subscribe is a viable network architecture for multimedia conferencing, including real-time audio and video. It is in no way a completed product, merely a demonstration of what is possible.

There were two main goals of the project:

  • Produce a working proof of concept application.
  • Produce sufficient performance analysis data to identify if pub/sub is a viable solution.

The project was broken down into multiple components:

  • Broker: responsible for managing subscriptions and routing packets between clients.
  • Client: user-interface to demonstrate the capabilities of the system.
  • AI: framework for performance analysis of both the protocol and the implementation.

The components fit together as shown below:

Venue

The broker, which I was responsible for developing, implements a general pub/sub protocol that is non-application specific. This means that any application able to implement this basic protocol would be able to use the broker.

The broker understands 4 packet types:

  • LOGIN
  • LOGOUT
  • SUBSCRIBE
  • UNSUBSCRIBE 

Any other packets are treated as “user” packets and are published according to rules defined by the above packet types. Packets have the following format:

[packetlength][topic]=[payload]

where packetlength is a fixed 6 bytes and counts the number of bytes remaining to be read from the socket to complete the packet, and topic is of the form ,entry[,entry[…]].

With the exception of the above 4 packet types, the broker will never examine the payload of a packet. This is ideal as it allows clients to publish any form of data in any format, for example the proof of concept client publishes zlib compressed video data.
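A minimal sketch of how a client might frame and read packets in this format follows. The ASCII-digit encoding of the 6-byte length field is my assumption; the real wire encoding may differ.

```python
# Sketch of framing and reading one broker packet:
# [6-byte length][topic]=[payload], payload opaque to the broker.
# The length field is assumed to be ASCII digits (an assumption).
import io

def make_packet(topic, payload):
    body = topic.encode() + b"=" + payload
    return b"%06d" % len(body) + body   # length counts bytes after the field

def read_packet(stream):
    length = int(stream.read(6))        # bytes remaining to be read
    body = stream.read(length)
    topic, _, payload = body.partition(b"=")  # split on the first '='
    return topic.decode(), payload
```

Because the broker never inspects the payload, anything after the first “=” passes through untouched, which is what lets the client publish arbitrary binary data such as zlib-compressed video.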

To take full advantage of the publish/subscribe concept the client (and AI) broke the world down into small areas, named cubes. As the client moved around the map, it would modify its subscriptions to ensure that it received data from the correct cubes.
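The cube-based subscription idea can be sketched as follows. The cube size, the use of a one-cube visibility radius, and the function names are all illustrative assumptions, not the actual client code.

```python
# Hypothetical sketch of cube-based subscription management: the world
# is divided into fixed-size cubes and the client subscribes to cubes
# near its position, adjusting subscriptions as it moves.

CUBE_SIZE = 10.0  # assumed world units per cube edge

def visible_cubes(x, y, z, radius=1):
    """Cube coordinates within `radius` cubes of the player's position."""
    cx, cy, cz = (int(v // CUBE_SIZE) for v in (x, y, z))
    return {(cx + dx, cy + dy, cz + dz)
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)
            for dz in range(-radius, radius + 1)}

def update_subscriptions(current, x, y, z):
    """Return (to_subscribe, to_unsubscribe) after moving to (x, y, z)."""
    wanted = visible_cubes(x, y, z)
    return wanted - current, current - wanted
```

Only the difference between the old and new cube sets generates SUBSCRIBE/UNSUBSCRIBE traffic, which is why cube size matters for the proportion of administrative packets measured later.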

The project was a success. As you can see below, we were able to render multiple clients with real-time audio and video onto a relatively large map. There was no significant performance degradation as the number of clients increased during our user testing.

Venue

Since we were able to produce a working multimedia conferencing application we then moved onto our second and final objective, the performance analysis.

Since Dr. Paul Kelly is the group leader of Imperial College London’s Software Performance Optimisation group, it came as no surprise to us that performance would be involved at some point. Given a throughput target of 50,000 packets per second, the AI kicked into gear and we saw how it performed.

Venue

We managed a peak throughput of roughly 20,000 packets per second with optimal parameters. We were later told that the target of 50,000 packets per second was a little optimistic.

We also benchmarked the effectiveness of the pub/sub protocol with varying cube size. The graph below shows the proportion of administrative (subscribe/unsubscribe) packets compared to the total number of packets processed by the system.

Venue

More information about this project may be available on request; please contact me by emailing andy@andymillar.co.uk.

1: Graphical Output and Input

Monday, October 8th, 2007

Lecture Notes

This first lecture on Graphics by Duncan Gillies covered:

  • Device Independent Graphics
  • Raster Graphics
  • Normalised Device Coordinates
  • World Coordinates
  • The Normalisation Transformation
  • Output Primitives
  • Input Primitives

Device Dependence

Wherever possible, applications should be developed to be device independent. To provide device independence, Graphics APIs (Application Programmer’s Interfaces) have been developed. Examples of such APIs are OpenGL and DirectX. These APIs (well, OpenGL specifically, since DirectX is a Microsoft API) allow a developer to create a graphics application that can work cross-platform on the majority of modern Operating Systems.

Raster Graphics

Most graphics devices are of the raster type, allowing the developer to plot individual points. The developer must specify the X and Y coordinates and the colour of the point they wish to plot. For example:

setPixel(XCoord, YCoord, Colour);

This leads to two problems:

  • What happens on different screen resolutions or application window sizes?
  • What if a different operating system addresses pixels differently?

World Coordinate System

World Coordinates are “Real World” coordinates that are mapped onto the pixel coordinates of a window. For example, an Architect may choose to use meters as a unit of measurement instead of pixels. The World Coordinates can be specified for a particular window using code similar to:

setWindowWorldCoordinates(WXMin, WXMax, WYMin, WYMax);

This is merely chosen for the developer’s convenience, and World Coordinates can be mapped onto pixel coordinates at a later stage in the application, before the pixel is modified. This allows the application to be size and resolution independent.

Graphics Primitives

Now that an internal system of points has been defined (the World Coordinates), the developer can now draw pictures using commands like:

drawLine(x1, y1, x2, y2);
drawCircle(centreX, centreY, radius);
drawText(x, y, “Text”);

Parts of the drawing that are outside of the application window are clipped, so that a picture will not affect anything outside its window.

Device independent graphics systems extensively use attributes. These can be used to set colours and fonts, where it would be tedious to have to specify the data on every command.

Normalisation

In order to implement a World Coordinate system, we need a method of mapping World Coordinates to actual pixel coordinates. Doing this is fairly straightforward. First, use the graphics API of your choice to obtain the maximum and minimum pixel coordinates:

getWindowPixelCoordinates(DXMin, DXMax, DYMin, DYMax);

Then, translate your World Coordinates into Pixel coordinates using the following translation:

Xd = Xw * A + B
Yd = Yw * C + D

Where:

A = (DXMax - DXMin) / (WXMax - WXMin)
B = -WXMin * (DXMax - DXMin) / (WXMax - WXMin) + DXMin

With similar equations for C and D, merely using the appropriate maximum and minimum Y values.
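Putting the transformation together as a direct sketch (the function name is illustrative, not from any real graphics API):

```python
# Sketch of the normalisation transformation: build a function mapping
# world coordinates (WXMin..WXMax, WYMin..WYMax) onto device/pixel
# coordinates (DXMin..DXMax, DYMin..DYMax).

def make_world_to_device(WXMin, WXMax, WYMin, WYMax,
                         DXMin, DXMax, DYMin, DYMax):
    A = (DXMax - DXMin) / (WXMax - WXMin)
    B = -WXMin * (DXMax - DXMin) / (WXMax - WXMin) + DXMin
    C = (DYMax - DYMin) / (WYMax - WYMin)
    D = -WYMin * (DYMax - DYMin) / (WYMax - WYMin) + DYMin

    def transform(Xw, Yw):
        # Xd = Xw * A + B, Yd = Yw * C + D
        return Xw * A + B, Yw * C + D
    return transform
```

As a sanity check, the world-coordinate corners map exactly onto the window's pixel corners, and the world's midpoint lands in the middle of the window.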

Viewports

Some graphics APIs allow for further separation of windows by providing viewports (subsections) of the display window.

Input Devices

The most important input device with respect to graphics, and the only device that this course really covers, is the mouse. This device records:

  • Distance moved in X Direction
  • Distance moved in Y Direction
  • Status of Buttons

The mouse causes a system interrupt every time it is moved or a button is pressed, and it is the responsibility of the application to track this movement and to display a pointer or marker.

Callback

The Operating System must share control of the mouse with the application since it has to handle events outside the application window. It therefore traps all mouse events and then passes them on to the application. The application must, after every action, return to a callback procedure (or event loop) to determine if any mouse events have happened and, if so, to process them. A sample callback procedure may look something like:

while (executing) {
    if (mouseEvent()) processMouseEvent();
    if (windowResizeEvent()) redrawGraphics();
    // do stuff
}

Device Independent Input

Typically, a user will move a pointer on the screen and use this to click on a button. This identifies a pixel, and for device independent input, its address can be translated into the user’s world coordinate system.

Two forms of visible marker are common. The first is called a locator and is typically implemented as a cross-hair or pointer. The second is called a rubber band, and is indicated by a line or box drawn from a fixed point to the point currently indicated by the mouse. In most cases, this functionality is provided by the system software.

Group Project Assignment: Event-driven massively-multiplayer gaming platform

Friday, October 5th, 2007

Ok, so we’ve now been assigned our group projects. We have roughly 3 months to complete the project, with an estimated 8 weeks coding time.

The group is:

  • Me
  • Jack
  • Jee
  • Marten
  • Sean

The “brief” for the project so far is:

Nutshell: build a massively-multiplayer game, from scratch, using a pub/sub network as a foundation. Build a simulator to study its performance and scalability.

Background:

In a massively-multiplayer interactive game, thousands of players around the internet interact in a shared, virtual environment. For this experience to be truly “immersive”, interactions between players need to be rendered accurately within a few tens of milliseconds. The goal of this project is to explore a decentralised approach to achieving this, by building on top of a “publish-subscribe” network. The pub/sub network infrastructure delivers events about the state of game agents to other agents on a “need-to-know” basis; an agent subscribes by specifying the class of events in which it is interested. Players only subscribe to events from agents whose behaviour is visible or might affect them. The objective of this project is to build the experimental infrastructure to explore performance issues in such a game, and in particular, to explore the performance issues involved in the pub/sub routing infrastructure – in fact, we aim to produce a pub/sub benchmark program. Which is also hopefully a motivating, even entertaining, game.

Breakdown:

This project has a number of elements:

  • Client game engine (actually it would be fine to use an existing engine here, preferably open source and cross-platform).
  • World/level generator. We need to be able to generate non-trivial virtual worlds algorithmically, so that we can automatically scale the size of the game and the number of players.
  • Virtual player AI. As well as human players we need robot players, both to make the game interesting and to be able to run automatic performance tests with large numbers of simulated players.
  • Publish/subscribe message routing infrastructure. We need a simple design that can be made very very fast – but which is extensible to include game-specific event filters (such as level-of-detail [“he’s too far away to see or hit me”], or clipping [“he’s behind a wall”]).
  • Network simulator. We need to evaluate the scheme using a large network, together with robot players and servers, to study the performance of the game – to show message volume and interaction delay increases with number of players (and world size etc). I suggest an event-driven simulation harness that coordinates execution of client, broker and server code running as separate processes.

This is a big project, so our objective is to simplify as much as possible – and to structure it so there are several partially-separable parts. It has the potential to be quite a lot of fun.

Deepthroating Gherkins

Monday, July 9th, 2007

It seems that a while ago, at a party, one of my (now) flatmates had a few to drink and decided that deepthroating a gherkin would be a good idea.

In true party style, this was captured on video and is below:

So far, there have been 14,100 views. Ken is famous!