
Saturday, October 11, 2014

Flight Controller Logic

Over the summer, I worked on the flight controller. The purpose of the flight controller is to make it easy for any ordinary human or AI to fly an unbalanced spaceship in a game with simulated physics. In space, the laws of inertia become very significant, as anything put in motion will tend to stay in motion. The flight controller would do two things: compensate for inertia so that the craft is easier to control, and allow an unbalanced spaceship to fly straight.

The first step I took was to create an algorithm that would take a target direction/torque as input and accelerate the assembly in that direction/rotation. I figured once I had this, the rest would be relatively easy.

The current way the algorithm works is as follows:

  1. The force and torque vectors each engine will exert on the assembly are calculated and stored.
  2. These force/torque vectors are then grouped together to form 6-dimensional vectors, one for each engine.
  3. These vectors are then transferred into a matrix class as column vectors.
  4. An additional target column vector is added to the far right of the matrix; this is the "target acceleration" input vector.
  5. An algorithm puts the matrix into reduced row echelon form using Gaussian elimination. (If you think of the matrix as a system of equations, it essentially solves for the unknowns; see the sketch after this list.)
  6. The last column vector in the matrix now holds a list of values: essentially, the ratios of the engines to each other required to accelerate in the input vector's direction.
  7. These values are then scaled to produce the maximum thrust possible, and the engines are turned on.
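To make steps 3-6 concrete, here is a rough sketch of the solve (the matrix layout and function name are placeholders for this post, not the project's actual code). Column j holds engine j's 6D force/torque vector, and the extra right-hand column is the target acceleration; for the 6-engine case, rows = 6 and the matrix is 6x7.

function SolveAugmented(m: float[,], rows: int): float[] {
    var cols: int = rows + 1;
    for (var pivot: int = 0; pivot < rows; pivot++) {
        //partial pivoting: swap in the row with the largest pivot entry
        var best: int = pivot;
        for (var r: int = pivot + 1; r < rows; r++)
            if (Mathf.Abs(m[r, pivot]) > Mathf.Abs(m[best, pivot])) best = r;
        for (var c: int = 0; c < cols; c++) {
            var tmp: float = m[pivot, c];
            m[pivot, c] = m[best, c];
            m[best, c] = tmp;
        }
        //normalize the pivot row (a near-zero pivot here means the engine
        //vectors are linearly dependent -- see flaw 2 below)
        var scale: float = m[pivot, pivot];
        for (var c2: int = pivot; c2 < cols; c2++) m[pivot, c2] /= scale;
        //eliminate the pivot column from every other row
        for (var r2: int = 0; r2 < rows; r2++) {
            if (r2 == pivot) continue;
            var f: float = m[r2, pivot];
            for (var c3: int = pivot; c3 < cols; c3++) m[r2, c3] -= f * m[pivot, c3];
        }
    }
    //the last column now holds the engine ratios
    var ratios: float[] = new float[rows];
    for (var r3: int = 0; r3 < rows; r3++) ratios[r3] = m[r3, cols - 1];
    return ratios;
}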
While the algorithm technically works, there are some flaws:
  1. It does not take into account the fact that engines cannot fire in reverse, so some of the output values can be negative depending on the input and the engines available. This results in engines firing backwards.
  2. This solution works only with exactly 6 engines, and usually only if their force/torque vectors are linearly independent (i.e. only one engine for each direction).

So far I have found a partial workaround for the second flaw. If the number of engines is fewer than 6, multiplying both the target vector and the original matrix of force/torque values by the transpose of the original matrix creates a system with as many equations as there are engines. This can then be solved to get ratio values that will fire the engines with a resultant force/torque as close as possible to the input target force/torque vector.
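In other words, this is the normal-equations trick from least squares: instead of solving A x = b directly, solve (AᵀA) x = Aᵀb. A rough sketch (again with placeholder names), reusing the elimination routine above:

//with n engines, A is 6xn and not square, so build the n x (n+1) augmented
//system [A^T A | A^T b] and feed it to the same elimination routine. The
//result is the least-squares closest match to the target force/torque.
function BuildNormalEquations(a: float[,], b: float[], n: int): float[,] {
    var m: float[,] = new float[n, n + 1];
    for (var i: int = 0; i < n; i++) {
        for (var j: int = 0; j < n; j++) {
            //(A^T A)[i, j] = dot product of engine columns i and j
            for (var k: int = 0; k < 6; k++) m[i, j] += a[k, i] * a[k, j];
        }
        //(A^T b)[i] = dot product of engine column i with the target
        for (var k2: int = 0; k2 < 6; k2++) m[i, n] += a[k2, i] * b[k2];
    }
    return m; //solve with SolveAugmented(m, n)
}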



Friday, May 9, 2014

Video Demo

Finally found some time to make a short demo video; compared to the last one, I think it's coming out very well!

Basically the video shows a few things:

  • The new graphics/art and all that good stuff
  • Camera tracking
  • Camera rotation tracking
  • The current input system
  • Engine functionality
  • Laser effects
  • Toggling in and out of edit mode
  • Selecting different assemblies
  • How hard it is to fly

The camera rotation tracking basically rotates the camera as if it were parented to the selected assembly; the location tracking does the same, minus the rotation. The cool thing about the rotation tracking is that you can still orbit the assembly normally with the mouse while it is rotating.
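A rough sketch of how the tracking works (the variable names are illustrative, not the actual project code):

//the camera rig pivot follows the assembly's center of mass, and optionally
//its rotation, while the mouse orbit is applied on top as a local offset
var assembly: Rigidbody;
var orbitOffset: Quaternion = Quaternion.identity; //updated by mouse input
var rotationTracking: boolean = true;

function LateUpdate(){
    //location tracking: follow the center of mass
    transform.position = assembly.worldCenterOfMass;
    //rotation tracking: rotate with the assembly, keeping the mouse orbit
    if (rotationTracking) transform.rotation = assembly.transform.rotation * orbitOffset;
    else transform.rotation = orbitOffset;
}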

It is currently very difficult to control a built ship. Part of this is because the only way to rotate the ship is with engines, which are too powerful for anything but large assemblies/ships. The other reason is the physics: once you start rotating or picking up speed, rotational and linear inertia take over, and you keep spinning/moving until there is a counter force or torque.

To solve the first issue with rotation, I will likely add some RCS thrusters (basically thrusters for small changes in momentum or rotation) and a reaction wheel (a magic science device used to exert a rotational force on an object; for the science behind it, see here).
The second issue is a little more difficult, but the basic principle is to have a flight computer that tells the engines to slow the ship down after you accelerate, or to slow the rotation down after you rotate. The computer would use the available engines, increasing or decreasing the thrust on each one as needed. In this way non-symmetrical ships would be flyable. Using the flight computer, the ship would move more like people are used to from simpler games, while still obeying the physics. The computer would of course be optional for those who want direct control. I plan to make it so both the flight computer and direct control can be used at the same time.
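The core of the idea is just damping: when no input is held, ask the engines for an acceleration that opposes the current motion. A minimal sketch, assuming a Rigidbody-based ship and a tunable gain (both names illustrative):

//request accelerations opposing the current linear and angular velocity;
//the resulting targets would then be handed to whatever distributes thrust
//across the available engines
function GetDampingTargets(body: Rigidbody, gain: float): Vector3[] {
    var targets: Vector3[] = new Vector3[2];
    targets[0] = -body.velocity * gain;        //counter linear inertia
    targets[1] = -body.angularVelocity * gain; //counter rotational inertia
    return targets;
}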


Thursday, April 17, 2014

Update

I have been working on the multiplayer aspect of the game for quite a bit; as a result, most of the infrastructure for it is now in place. As it is right now it is playable, however a few things are missing, mainly client-side prediction. So if you go too fast, the client receives positional update information that is too old, which causes your ship to jump backwards and jitter a lot when you are going fast.

The camera control has been much improved: the camera now tracks the center of mass of your assembly, and there is a key that toggles rotation tracking. Rotation tracking just causes the camera to rotate with the assembly if the assembly rotates.

Below are some screenshots showing some significant graphics updates, a bit of the logic system, and the laser module:



Friday, February 28, 2014

Physics Multiplayer

It's been a while since I posted anything here; I've been very busy, both in college and working on this project. While I have been working on the project quite a bit, I haven't updated this blog much. Much has changed; it's practically a playable demo at this point.

Multiplayer has been the most recent feature I have been trying to implement. Adding multiplayer to a game like this is very challenging for several reasons:
  1. This game is physics based.
  2. It has to be as lag-free as possible for direct user feedback.
That's it. 

Only problem is, those two things generally don't go together, especially when using Unity's physics. Here is why:

In most multiplayer games, you have a server and a client: the server runs the official version of the game, and the client runs a similar version that the server keeps in sync. Even with a very good internet connection, the delay between sending information to a server and receiving the resulting update of the simulation can range from .2 to .4 seconds. Say you press the space key to jump: the signal travels from your computer to the server in .1 to .2 seconds, the server receives it, tells your player to jump, then sends the updated position back to the client. So it takes .2 to .4 seconds for your command to appear on the client's display.

To fix this delay, client-side prediction can be used. How this works is your player moves immediately after you issue a command, and the command is also sent to the server. The server then sends its results back to the player. However, this data from the server cannot be used directly to update the player; it's .2-.4 seconds old. The server is constantly sending out its idea of where the player should be, so if this old data were used to correct the player's position, then client-side prediction would be pretty much invalidated.
To compensate for this, the data the server sends can be "checked" against what the player's position was in the past. If the two don't match closely enough, then an extrapolation is made of where the player would be assuming the two did match, and the player is moved there.
The server also has to update the other players, since each of them also has a simulation of the world. The flowchart above shows roughly how this would work.
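A rough sketch of that check-and-correct step (class and variable names here are illustrative, not the project's actual code):

import System.Collections.Generic;

//record timestamped local states, compare the (old) server state against the
//matching recorded state, and shift the present by the difference instead of
//snapping back to the stale server position
class StampedState {
    var time: float;
    var position: Vector3;
}

var history: List.<StampedState> = new List.<StampedState>();

function OnServerState(serverTime: float, serverPos: Vector3) {
    //find the recorded local state closest to the server's timestamp
    var past: StampedState = null;
    for (var s: StampedState in history)
        if (past == null || Mathf.Abs(s.time - serverTime) < Mathf.Abs(past.time - serverTime))
            past = s;
    if (past == null) return;

    //if the past disagrees with the server, apply that same error to the present
    var error: Vector3 = serverPos - past.position;
    if (error.magnitude > 0.1)
        transform.position += error;
}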

One of the problems with client-side prediction, though, is that you need to predict something based off of old information. With Unity's physics, you cannot do this: Unity's physics cannot "rewind", "resimulate", or "predict" any part of the physics system. That's why making physics-based multiplayer is very challenging, especially in Unity.

To get around this problem, you could simulate some data yourself; essentially, you would need your own simple physics engine. What my project will do is use a custom rigidbody system that supports "predicting" where it will be in the future. Unfortunately, it will not be possible to check collisions, so those will have to be handled server side, which will likely result in lag glitches during a collision.
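A minimal sketch of what such a predictable rigidbody could look like (names illustrative). The point is that stepping it is deterministic, so replaying stored inputs from a saved state always reproduces the same result, which is exactly what prediction needs and what Unity's physics can't do:

class SimpleBody {
    var position: Vector3;
    var velocity: Vector3;

    function Step(force: Vector3, mass: float, dt: float) {
        velocity += (force / mass) * dt; //integrate acceleration into velocity
        position += velocity * dt;       //integrate velocity into position
    }
}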

This being a spaceship game, though, there should be few enough collisions that this won't be a problem.


Saturday, December 21, 2013

The Action System

Been very busy during the college semester. Getting used to college, then trying to keep up, has left little room for this project. However, finals are now over and I have nearly a month off, so time to get some more stuff done.

This past week and in my spare time over the semester I have been working on the action system. In an earlier post I described the action system as a relationship between the user, the assembly, and the actions that the assembly is capable of executing through its modules. When I went to implement this, I discovered it would not be as simple as it first appeared. 

The action system was going to be broken into three parts: the easy to use simple logic system (this would allow direct designation of keys to specific actions), the logic/node editor (this would allow for very complex programming without the user writing any code), and the code editor (just like it sounds, you have to write code to program stuff). 

The first part, the easy-to-use logic system, was going to be implemented as quickly and easily as possible while still allowing for expansion. Then later on, when more of the game functionality was added, the second part would be implemented, which would allow the complex node/logic editing in addition to the simple keybinding. The two are unfortunately inseparable: in order to implement the first part, the second part (or at least its underlying infrastructure) must exist first.

So this past week, and in my spare time over the semester, I have been working on the underlying infrastructure for the node editor. If you are familiar with programming, you can see how a node editor can be used to do logic. Each node in the editor will be broken into as many as three different parts: an input, an activator, and an action. The input is some sort of trigger or change, so a variable change can serve as an input just as well as a keyboard press. Each input can be connected to one or more activators, and likewise each activator can be connected to one or more inputs. The purpose of the activator is to handle the inputs and activate actions based on them. Each activator can also connect to one or more actions. The action is simply a link to a premade function within the code of a module, and anything passed to the action upon its activation will also be sent to its target function. This target function can itself act as an input, so upon its activation it can trigger another activator. This depends on the actual code inside the action function, but it allows for some very complex behaviors.
Each input/activator/action together makes a node. The user will be able to use these nodes to create the desired behavior.
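A rough sketch of the three pieces and how they chain together (class names are placeholders; the "Node" prefixes just avoid clashing with Unity's own Input class):

import System.Collections.Generic;

//an input triggers its activators, each activator forwards to its actions,
//and each action calls a premade function inside a module's code
class NodeAction {
    var target: Function; //the premade function within the module
    function Activate(value: Object) { target(value); }
}

class NodeActivator {
    var actions: List.<NodeAction> = new List.<NodeAction>();
    //handles connected inputs and activates actions based on them
    function OnInput(value: Object) {
        for (var a: NodeAction in actions) a.Activate(value);
    }
}

class NodeInput {
    var activators: List.<NodeActivator> = new List.<NodeActivator>();
    //fired by a key press, a variable change, or an action's target function
    function Trigger(value: Object) {
        for (var act: NodeActivator in activators) act.OnInput(value);
    }
}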


The above system has been mostly implemented so far, with the exception of passing in or handling values, saving and loading the logic, and the actual nodes. It works well for simple things like a keypress. One of the reasons that actions have to activate a target (predetermined) function on the module, instead of arbitrary in-game functions, is this: it limits the user to the module's actual capabilities, i.e. you cannot do something with the module unless it has been programmed to be able to do it. In this way you can control the spacecraft however you like; your only limitation is its limitations. This makes sense: you wouldn't want someone changing their max health, making themselves indestructible, or other hacks.


Saturday, October 19, 2013

A word on Quaternions and doing complex rotations - Solution

In my older post on Quaternions, I described a problem but failed to show the solution. Someone commented on that, so I felt inclined to do a full post describing the solution. This post is mostly code and what it does; just a warning to those reading.

The problem was as follows:
How do you align the faces of two objects when the objects themselves are facing different directions?

Note the setup here: a side is one of the faces being oriented, and the gameObject is the object the side is a part of. A side has its own custom class with information about it (where it is relative to the object, its orientation, its number of geometric sides, etc.).

Step 1 - Create a rotation variable that you are going to manipulate, and assign it the gameObject's current rotation:

var rot: Quaternion = gameObject.transform.rotation; 

Step 2 - Create a function that will manipulate the rotation to point the side toward the target direction. How I did it is shown below:

function GetLookRot(side: Side, rotation: Quaternion){
    //assigns rotation for manipulation
    var rot: Quaternion;
    if (rotation == null) rot = gameObject.transform.rotation;
    else rot = rotation;

    //the location of the side relative to the center of the object, which is
    //the same thing as a vector that starts at 0,0,0 and passes through the
    //location of the side
    var fromLocalDir: Vector3 = location;

    //this finds the vector that starts at the target gameObject's center and
    //passes through the target side's center, in global coordinates; the
    //target side's location is negated because we want the sides to face
    //each other rather than point in the same direction
    var target: Vector3 = (side.gameObject.transform.rotation * (-side.location))
        + gameObject.transform.position;

    //takes the global vector described above and strips the position back
    //out, keeping only its global direction
    var toGlobalDir: Vector3 = target - gameObject.transform.position;

    //updates the first vector by rotating it from local rotation to global
    //rotation, while still keeping its local position
    var fromGlobalDir: Vector3 = rot * fromLocalDir;

    //so now we have two vectors: the first points in the direction the side
    //is currently pointing, and the second points opposite the direction the
    //target side is pointing. FromToRotation gives a Quaternion describing
    //the rotation from the first direction to the second; combining it with
    //the gameObject's current orientation (by multiplying) gives a rotation
    //that points the current side at the target side.
    return Quaternion.FromToRotation(fromGlobalDir, toGlobalDir) * rot;
}



Step 3 - The next function assumes that the sides are parallel; it takes them and aligns their vertices. Because it handles rotations, the sides do not actually have to be parallel yet, just the rotation you feed this function would have to put them parallel. This step is a bit more complicated....

//returns a rotation that aligns the face with the target face (ie: so all
//the vertexes of both sides end up at the same positions), rotating along
//the normal axis of the side. Rotates the current rotation, or the given
//rotation if one is passed in; the target rotation is given through the
//passed side.
function GetAxisRot(side: Side, rotation: Quaternion){
    //assigns rotation for manipulation
    var rot: Quaternion;
    if (rotation == null) rot = gameObject.transform.rotation;
    else rot = rotation;

    //defAxis is a vector that originates at the center of the side and passes
    //through a vertex on the side. It represents the side's rotation around
    //its normal (the y axis if the side were lying horizontally).
    var direction: Vector3 = rot * defAxis;

    //holds the different possible target vectors, one per vertex
    var targets: Vector3[] = new Vector3[side.polySides];

    //sets the array of axes by spinning the target side's defAxis around its
    //normal in even steps (note 360.0 to avoid integer division)
    var i: int;
    for (i = 0; i < side.polySides; i++) {
        if (i == 0) targets[i] = side.gameObject.transform.rotation * side.defAxis;
        else targets[i] = side.gameObject.transform.rotation
            * (Quaternion.AngleAxis((360.0 / side.polySides) * i, side.location) * side.defAxis);
        //shows each axis for visualization
        Debug.DrawRay(side.gameObject.transform.position, targets[i], Color.red);
    }
    //shows the current axis for visualization
    Debug.DrawRay(gameObject.transform.position, direction, Color.red);

    //angle1: the angle between the target and the closest available direction
    var angle1: float = 360;
    //angle2: the angle between the target and the direction just before that one
    var angle2: float = 360;
    //angle: the angle difference between the target and current directions, in +- degrees
    var angle: float;
    //assigns the above
    var j: int;
    for (i = 0; i < polySides; i++) {
        if (i == 0) j = polySides - 1;
        else j = i - 1;
        if (angle1 > Vector3.Angle(direction, targets[i])) {
            angle1 = Vector3.Angle(direction, targets[i]);
            angle2 = Vector3.Angle(direction, targets[j]);
        }
    }
    //Vector3.Angle is unsigned, so angle2 determines which way to rotate
    if (angle2 < 360.0 / polySides) angle = -angle1;
    else angle = angle1;

    //applies the final rotation around the side's normal axis
    var finalRot: Quaternion = rot * Quaternion.AngleAxis(angle, location);
    return finalRot;
}

Step 4 - Once you have both functions you run them like this:

var rot: Quaternion = gameObject.transform.rotation;
rot = GetLookRot(side, rot);
rot = GetAxisRot(side, rot);
return rot;

And that is how I did it; hopefully this can be of some use to someone :)

Saturday, August 24, 2013

Actions

Been really busy the past few weeks, so I haven't had much to show for the time. With college classes starting in two weeks, the available time is only going to decrease. However, I did manage to create and partially implement the action system.

What this system does is handle module actions. It will be integrated into how the AI runs and how the user controls and configures input, and it will provide a framework for handling each module's actions. For example, an engine-type module would have actions for controlling the engine: Activate, Deactivate, Set Power, etc. So to configure such an engine, you could assign, say, the spacebar to Activate the engine when pressed and Deactivate it when released.

The system works by storing a list of the module's actions in the module itself. An action in this system is a custom class with an activate function and a constructor. The constructor assigns the target of the action, so when the action is "activated" it calls the targeted function. In addition, the constructor assigns a description and a name for the action, so that the user knows exactly what the action does. All the actions are assigned to their parent module at the start of runtime and made available to all objects that have access to the module. The assembly itself will be the main object activating these actions and assigning inputs to specific actions. These inputs will be stored on the assembly itself so that individual sets of controls stay isolated (i.e. the user controls one assembly and an AI controls another; their actions don't activate each other because they are assigned to separate assemblies). Basically, the assembly will act as the distribution hub for all of its modules' actions. Eventually it will be possible to create custom actions that set off multiple others, sort of like programming.
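A rough sketch of what such an action class could look like (names here are illustrative, not the project's actual code):

//the constructor stores the name, description, and target function;
//Activate calls the targeted function inside the module
class ModuleAction {
    var name: String;
    var description: String;
    var target: Function;

    function ModuleAction(name: String, description: String, target: Function) {
        this.name = name;
        this.description = description;
        this.target = target;
    }

    function Activate() { target(); }
}

An engine module could then register something like new ModuleAction("Activate", "Turns the engine on", ActivateEngine), where ActivateEngine is a hypothetical stand-in for the module's premade function.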

With this system, it would be possible to practically let the user program the ship as they would a robot. One of the cool things about this is that a user could create a control system and share it with others who just want to fly the ship and skip the whole configuration aspect. Or even better, a user could create their own AI. These are some of the major end goals. For the moment, though, a simple "this key/button activates this action" system will be created, until after some sort of release; more complex layers of programming will become possible later on.

There are still some fine details to work out, like accessing module variables, but for now I hope just to be able to get a ship that is completely configured by the user.