Devblog for Assimilation, a game that features modular construction of spacecraft, Newtonian physics, and some other interesting things.

Wednesday, April 29, 2015

Around 2010? I don't remember exactly, it was a while ago... but around that time I found this cool game called Battleships Forever (if you haven't heard of it, well, Google is your friend). Battleships Forever is a game about spaceships that are put together section by section and can fight other ships. I really liked the game, but it lacked many features: no multiplayer, no way of testing your ship against opponents in real time... the dev had disappeared, and there was only so much you could do. So I wanted to try making something better.
In the summer of 2011 I had this idea: what if I made a game that was 3D, and you could build ships out of 3D components? How would that work? My thoughts aflood with these new ideas, I began drawing pictures of modular spaceships and planning how it would all work.
At this time I had never programmed before (well, not in an actual programming language; I had built things with the programming blocks in the Blender game engine), but a friend introduced me to Unity3D. Intrigued by the engine's possibilities, I made a few code experiments by following tutorials, and discovered that programming wasn't that hard after all.
With the tool and the idea, I began work on the project that's been eating up all my free time since that summer in 2011 (well, almost all; I still play games once in a while!). In the first few months I had a crappy module put-together thing that let you place modules onto each other. The code sucked (it was my first coded program, after all), so I ended up starting from scratch with the new knowledge I had gained. I finished this next attempt around the end of 2012/early 2013. The code was somewhat better, so I was able to get much farther before the codebase was spaghettified by inexperience. I had a simple ship builder (with many limitations) that let you place modules, and a separate scene that let you load your creations and simulate them with Unity's physics engine. It... worked, but it was very limited. Adding new modules was complicated, and the graphics were not that great. Even worse, the codebase looked like, well, a mess. I had learned so much, though; I believed I knew how to redo it without spaghettifying the codebase, and I really, really wanted to finish the game and make it good. So I started fresh, copied what I could, and began again.
Two years later, I have something that, I think, is almost ready to be shared with the world. There are still bugs and still lots of features to add, but the codebase is clean, the graphics are good, the engine supports construction even while you're being shot at mid-battle (though I'm not sure if that should be included in the game), and I think most of the time-consuming huge features are in. (It even has basic multiplayer support! Well, sort of; it still needs a UI, lobbies, etc., but the codebase itself supports everything in multiplayer!)
So, that's my story. Right now I am working on features and a website; I hope to have the site finished before the summer so I can start building a community and getting alpha testers. Oh, and the name on the blog is not the final name for the game; I'm still thinking of one, actually (if you have any ideas, I'm all ears!).
Sunday, March 22, 2015
Vector-Styled Graphics
It's been a while since I last posted, but I have still been working on this project. To give the game a much better feel, I decided to work on the graphics.
Since I am good at modeling 3D objects but rather terrible at texturing them, I decided to write a script to generate a texture map. The script generates an edge-highlight map: it darkens any area close to a sharp edge, based on the distance to the edge. It's also possible to change how sensitive it is to edges; i.e., at 10, it will only count an edge as sharp if the angle between the normals of its faces is greater than 10 degrees (a rough code sketch of this check appears at the end of the post). Below you can see an example:
I also wrote a shader. The shader takes the information in the edge map and draws a solid color depending on the distance to the edge. This allows it to produce a "thick" edge-highlight effect, such as in the image below:
Below, an image of a simple spaceship using the script and shader:
As you can see, there are some artifacts; these are due to some defects in the script, and to aliasing. After fixing the script and adding anti-aliasing plus bloom for the glow, we get a much cleaner look:
Depth perception is still a bit tricky at this point, so I made the shader a surface shader by combining Unity 5's new lighting model with the previous edge-highlight shader. This makes it affected by lights and produces a nice effect:
What's really cool about the script, though, is that it works for any 3D model, which means the only bottleneck for adding new modules is how fast I can model one and import it into Unity. This is really exciting because it gives the project a unique look and makes it possible to have lots of modules.
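For the curious, here is a rough sketch of the sharp-edge detection step, assuming a Unity Mesh. All the names here are made up, and the real script additionally bakes a per-texel distance-to-edge value into the highlight map:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Rough sketch of the sharp-edge detection step (hypothetical names).
public static class EdgeDetector
{
    // Returns packed vertex-index pairs for edges whose adjacent faces meet
    // at an angle greater than angleThreshold degrees.
    public static List<long> FindSharpEdges(Mesh mesh, float angleThreshold)
    {
        Vector3[] verts = mesh.vertices;
        int[] tris = mesh.triangles;

        // Map each edge to the normals of the faces that share it.
        // (A real version would match edges by position rather than index,
        // since imported meshes often duplicate vertices along UV seams.)
        var edgeNormals = new Dictionary<long, List<Vector3>>();
        for (int t = 0; t < tris.Length; t += 3)
        {
            int a = tris[t], b = tris[t + 1], c = tris[t + 2];
            Vector3 n = Vector3.Cross(verts[b] - verts[a], verts[c] - verts[a]).normalized;
            AddEdge(edgeNormals, a, b, n);
            AddEdge(edgeNormals, b, c, n);
            AddEdge(edgeNormals, c, a, n);
        }

        // An edge is sharp if its two face normals differ by more than the threshold.
        var sharp = new List<long>();
        foreach (var kv in edgeNormals)
            if (kv.Value.Count == 2 && Vector3.Angle(kv.Value[0], kv.Value[1]) > angleThreshold)
                sharp.Add(kv.Key);
        return sharp;
    }

    // Packs an unordered vertex pair into one key so both windings map to the same edge.
    static void AddEdge(Dictionary<long, List<Vector3>> map, int i, int j, Vector3 normal)
    {
        long key = ((long)Mathf.Min(i, j) << 32) | (uint)Mathf.Max(i, j);
        List<Vector3> list;
        if (!map.TryGetValue(key, out list)) map[key] = list = new List<Vector3>();
        list.Add(normal);
    }
}
```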
Saturday, October 11, 2014
Flight Controller Logic
Over the summer, I worked on the flight controller. The purpose of the flight controller is to make it easy for any ordinary human or AI to fly an unbalanced spaceship in a game with simulated physics. In space, the laws of inertia become very significant, as anything put in motion will tend to stay in motion. The flight controller would do two things: compensate for inertia so that the craft is easier to control, and allow an unbalanced spaceship to fly straight.
The first step I took was to try to create an algorithm that would take a direction/torque as an input and accelerate the assembly in that direction/rotation. I figured once I had this, the rest would be relatively easy.
The current way the algorithm works is as follows (a code sketch follows the list):
- The force and torque vectors each engine will exert on the assembly are calculated and stored.
- These force/torque vectors are then grouped together to form six-dimensional vectors, one for each engine.
- These vectors are then transferred into a matrix class as column vectors.
- An additional target column vector is added to the far right of the matrix; this is the "target acceleration" input vector.
- An algorithm puts the matrix into reduced row echelon form using Gaussian elimination. (If you think of the matrix as a system of equations, it essentially solves for the unknowns.)
- The last column of the matrix now holds a list of values; essentially, these are the ratios of the engines to each other required to accelerate in the input vector's direction.
- These values are then scaled to produce the maximum thrust possible, and the engines are turned on.
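Here is a minimal sketch of that solve, assuming exactly six engines with linearly independent force/torque columns (which matches the limitation discussed below). The names are hypothetical; the real code lives in its own matrix class:

```csharp
using UnityEngine;

public static class EngineSolver
{
    // forces[i]/torques[i]: what engine i exerts at full throttle. Returns one
    // throttle ratio per engine so the combined force/torque points along the
    // target, scaled so the largest ratio has magnitude 1 (maximum thrust).
    public static float[] SolveEngineRatios(
        Vector3[] forces, Vector3[] torques, Vector3 targetForce, Vector3 targetTorque)
    {
        int n = forces.Length;            // assumed to be 6 for an exact solution
        float[,] m = new float[6, n + 1]; // augmented matrix: engine columns + target

        for (int j = 0; j < n; j++)
        {
            m[0, j] = forces[j].x;  m[1, j] = forces[j].y;  m[2, j] = forces[j].z;
            m[3, j] = torques[j].x; m[4, j] = torques[j].y; m[5, j] = torques[j].z;
        }
        m[0, n] = targetForce.x;  m[1, n] = targetForce.y;  m[2, n] = targetForce.z;
        m[3, n] = targetTorque.x; m[4, n] = targetTorque.y; m[5, n] = targetTorque.z;

        // Gauss-Jordan elimination to reduced row echelon form.
        for (int pivot = 0; pivot < 6 && pivot < n; pivot++)
        {
            int best = pivot; // partial pivoting for numerical stability
            for (int r = pivot + 1; r < 6; r++)
                if (Mathf.Abs(m[r, pivot]) > Mathf.Abs(m[best, pivot])) best = r;
            for (int c = 0; c <= n; c++)
            { float tmp = m[pivot, c]; m[pivot, c] = m[best, c]; m[best, c] = tmp; }

            float p = m[pivot, pivot];
            if (Mathf.Abs(p) < 1e-6f) continue; // dependent engine columns
            for (int c = 0; c <= n; c++) m[pivot, c] /= p;

            for (int r = 0; r < 6; r++)
            {
                if (r == pivot) continue;
                float f = m[r, pivot];
                for (int c = 0; c <= n; c++) m[r, c] -= f * m[pivot, c];
            }
        }

        // The last column now holds the engine ratios; rescale for max thrust.
        float[] ratios = new float[n];
        float max = 0f;
        for (int i = 0; i < n && i < 6; i++)
        {
            ratios[i] = m[i, n];
            max = Mathf.Max(max, Mathf.Abs(ratios[i]));
        }
        if (max > 1e-6f)
            for (int i = 0; i < n; i++) ratios[i] /= max;
        return ratios;
    }
}
```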
While the algorithm technically works, there are some flaws:
- It does not take into account the fact that engines cannot fire in reverse, so some of the output values can be negative depending on the input and the engines available. This results in engines firing backwards.
- This solution works only for exactly 6 engines, and usually only if their force/torque vectors are linearly independent (i.e., only one engine for each direction).
So far I have found a partial workaround for the second flaw. If the number of engines is fewer than 6, multiplying the target vector and the original matrix of force/torque values by the transpose of the original matrix creates a system with as many equations as you have engines (mathematically, this is forming the least-squares normal equations). This can then be solved to get ratio values that will fire the engines with a resultant force/torque as close as possible to the input target force/torque vector.
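In code, the workaround looks roughly like this (same hypothetical names as the sketch above):

```csharp
// A is the 6 x n matrix of per-engine force/torque columns, b the 6D target.
// Solving (A^T A) x = (A^T b) gives the ratios whose resultant force/torque
// is as close as possible to the target when no exact solution exists.
static float[,] BuildNormalEquations(float[,] A, float[] b, int n)
{
    var m = new float[n, n + 1]; // augmented n x (n+1) system
    for (int i = 0; i < n; i++)
    {
        for (int j = 0; j < n; j++)
            for (int k = 0; k < 6; k++)
                m[i, j] += A[k, i] * A[k, j]; // (A^T A)[i, j]
        for (int k = 0; k < 6; k++)
            m[i, n] += A[k, i] * b[k];        // (A^T b)[i]
    }
    return m; // solve with the same Gaussian elimination as before
}
```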
Friday, May 9, 2014
Video Demo
Finally found some time to make a short demo video; compared to the last one, I think it's coming out very well!
Basically the video shows a few things:
- The new graphics/art, all that good stuff
- Camera tracking
- Camera rotation tracking
- The current input system
- Engine functionality
- Laser effects
- Toggling in and out of edit mode
- Selecting different assemblies
- How hard it is to fly
The camera rotation tracking basically rotates the camera as if it were parented to the selected assembly; the location tracking does the same, minus the rotating part. The cool thing about rotation tracking is that you can still orbit the assembly normally with the mouse while it is rotating.
It is currently very difficult to control a built ship. Part of this is because the only way to rotate the ship is with engines, which are too powerful for anything but large assemblies/ships. The other reason is the physics: once you start rotating or gaining speed, rotational and directional inertia take over, and you keep spinning/moving until there is a counter force or torque.
To solve the first issue with rotation, I will likely add some RCS thrusters (basically thrusters for small changes in momentum or rotation) and a reaction wheel (a magic science device used to exert a rotational force on an object; for the science behind it, see here).
The second issue is a little more difficult, but the basic principle is to have a flight computer that tells the engines to slow the ship down after you accelerate, or slows the rotation down after you rotate. The computer would use the available engines and would be able to increase or decrease the thrust on each one as needed; in this way, non-symmetrical ships would be flyable. Using the flight computer, the ship would move more like people are used to from simpler games, while still obeying the physics. The computer, of course, would be optional for those who want direct control, and I plan to make it so both the flight computer and direct control can be used at the same time.
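As a rough sketch of the principle (hypothetical names; a real implementation would route the targets through the engine-ratio solver described in the Flight Controller Logic post above rather than applying forces directly):

```csharp
using UnityEngine;

// Minimal sketch of the damping idea: each physics step, the flight computer
// requests a force/torque opposing the current motion, which would then be
// handed to an engine solver to throttle individual engines.
public class FlightComputer : MonoBehaviour
{
    public Rigidbody body;
    public float linearGain = 1f;   // how aggressively to kill drift
    public float angularGain = 1f;  // how aggressively to kill spin

    void FixedUpdate()
    {
        // Desired counter-accelerations: oppose whatever motion remains.
        Vector3 targetForce  = -body.velocity * linearGain * body.mass;
        Vector3 targetTorque = -body.angularVelocity * angularGain; // ignores the inertia tensor for brevity

        // Stand-in for the per-engine solve: apply the targets directly.
        body.AddForce(targetForce);
        body.AddTorque(targetTorque);
    }
}
```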
Thursday, April 17, 2014
Update
I have been working on the multiplayer aspect of the game for quite a while, and as a result most of the infrastructure for it is now in place. As it stands it is playable; however, a few things are missing, mainly client-side prediction. So if you go too fast, the client receives positional update information that is too old, which causes your ship to jump backwards and jitter a lot when you are going fast.
The camera control has been much improved: the camera now tracks the center of mass of your assembly, and there is a key that toggles rotation tracking. Rotation tracking just causes the camera to rotate with the assembly as the assembly rotates.
Below are some screenshots showing some significant graphics updates, a bit of the logic system, and the laser module:
Friday, February 28, 2014
Physics Multiplayer
It's been a while since I posted anything here; I've been very busy, both with college and with this project. While I have been working on the project quite a bit, I haven't updated this blog much. A lot has changed; it's practically a playable demo at this point.
Multiplayer has been the most recent feature I have been trying to implement. Adding multiplayer to a game like this is very challenging for several reasons:
- This game is physics-based.
- It has to be as lag-free as possible for direct user feedback.
That's it.
The only problem is, those two things generally don't go together, especially when using Unity's physics. Here is why:
In most multiplayer games you have a server and a client: the server runs the official version of the game, and the client runs a similar version that the server keeps in sync with itself. Even with a very good internet connection, the delay between sending information to a server and receiving the resulting update of the simulation can range from 0.2 to 0.4 seconds. Say you press the space key to jump: the signal is sent from your computer to the server in 0.1-0.2 seconds; the server receives the signal, tells your player to jump, then sends the updated position back to the client. So it takes 0.2-0.4 seconds for your command's result to appear on the client's display.
To hide this delay, client-side prediction can be used. The way it works is that your player moves immediately when you issue a command, and the command is sent to the server at the same time. The server then sends its results back to the player. However, this data from the server cannot be used directly to update the player: it is 0.2-0.4 seconds old. The server is constantly sending out its idea of where the player should be, so if this old data were used to correct the player's position, client-side prediction would be pretty much invalidated.
To compensate for this, the data the server sends can be "checked" against where the player was at that point in the past. If the two don't match closely enough, an extrapolation is made of where the player would be now, assuming the two had matched, and the player is moved there.
The server also has to update the other players, since each of them also has a simulation of the world. The flowchart above shows roughly how this would work.
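A bare-bones sketch of that check-and-correct loop, with hypothetical names and the state reduced to position/velocity; a fuller version would also re-simulate any inputs issued since the acknowledged update:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Bare-bones sketch of client-side prediction with reconciliation.
public class PredictedClient
{
    struct Snapshot { public int tick; public Vector3 pos, vel; }

    readonly List<Snapshot> history = new List<Snapshot>();

    public Vector3 position, velocity;
    public int currentTick;

    // Called every local physics step: move immediately, remember the state.
    public void Step(Vector3 inputAccel, float dt)
    {
        velocity += inputAccel * dt;
        position += velocity * dt;
        currentTick++;
        history.Add(new Snapshot { tick = currentTick, pos = position, vel = velocity });
    }

    // Called when an (old) authoritative state arrives from the server.
    public void OnServerState(int serverTick, Vector3 serverPos, Vector3 serverVel, float tolerance)
    {
        // Find what we predicted for that same tick.
        int i = history.FindIndex(s => s.tick == serverTick);
        if (i < 0) return;

        if ((history[i].pos - serverPos).sqrMagnitude > tolerance * tolerance)
        {
            // Mismatch: offset the present by the past error, i.e. extrapolate
            // where we would be now had the prediction matched the server.
            position += serverPos - history[i].pos;
            velocity += serverVel - history[i].vel;
        }
        history.RemoveRange(0, i + 1); // drop acknowledged history
    }
}
```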
One of the problems with client-side prediction, though, is that you need to predict something based on old information. With Unity's physics, you cannot do this: Unity's physics cannot "rewind", "resimulate", or "predict" any part of the physics system. That is why making physics-based multiplayer is very challenging, especially when using Unity.
To get around this problem, you could simulate some of the data yourself; essentially, you would need your own simple physics engine. What my project will do is use a custom rigidbody system that supports "predicting" where a body will be in the future. Unfortunately, it will not be possible to check collisions this way, so those will have to be handled server-side, which will likely result in lag glitches during a collision.
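A sketch of what makes such a custom rigidbody "predictable": its state is a plain value, and stepping it has no side effects, so the same function can advance the live body, re-simulate from an old snapshot, or look ahead. (Again, the names are hypothetical and the rotation integration is simplified.)

```csharp
using UnityEngine;

// Sketch of a rewindable rigidbody state. Because Step() is a pure function
// of its inputs, it can be re-run from any snapshot; Unity's built-in physics
// only ever advances the live scene.
public struct BodyState
{
    public Vector3 position, velocity;
    public Quaternion rotation;
    public Vector3 angularVelocity; // radians per second, world space

    public static BodyState Step(BodyState s, Vector3 accel, Vector3 angAccel, float dt)
    {
        s.velocity += accel * dt;
        s.position += s.velocity * dt;
        s.angularVelocity += angAccel * dt;
        // Integrate orientation from angular velocity (approximate).
        s.rotation = Quaternion.Euler(s.angularVelocity * Mathf.Rad2Deg * dt) * s.rotation;
        return s;
    }

    // Predict where the body will be after 'ticks' fixed steps of coasting.
    public static BodyState Predict(BodyState s, int ticks, float dt)
    {
        for (int i = 0; i < ticks; i++)
            s = Step(s, Vector3.zero, Vector3.zero, dt);
        return s;
    }
}
```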
This being a spaceship game, though, there should be few enough collisions that this won't be a problem.
Saturday, December 21, 2013
The Action System
I've been very busy during the college semester; getting used to college, then trying to keep up, has left little room for this project. However, finals are now over and I have nearly a month off, so it's time to get some more stuff done.
This past week, and in my spare time over the semester, I have been working on the action system. In an earlier post I described the action system as the relationship between the user, the assembly, and the actions that the assembly is capable of executing through its modules. When I went to implement this, I discovered it would not be as simple as it first appeared.
The action system was going to be broken into three parts: the easy-to-use simple logic system (this would allow direct assignment of keys to specific actions), the logic/node editor (this would allow very complex programming without the user writing any code), and the code editor (just like it sounds: you write code to program things).
The first part, the easy-to-use logic system, was going to be implemented as quickly and simply as possible while still allowing for expansion. Then later on, when more of the game's functionality was in place, the second part would be implemented, allowing complex node/logic editing in addition to the simple keybinding. The two are unfortunately inseparable: in order to implement the first part, the second part (or at least its underlying infrastructure) must exist first.
So this past week, and in my spare time over the semester, I have been working on the underlying infrastructure for the node editor. If you are familiar with programming, you can see how a node editor can be used to do logic. Each node in the editor is broken into as many as three different parts: an input, an activator, and an action. The input is some sort of trigger or change, so a variable change can serve as an input just as well as a keyboard press. Each input can be connected to one or more activators, and likewise each activator can be connected to one or more inputs. The purpose of the activator is to handle the inputs and activate actions based on them. Each activator can also connect to one or more actions. The action is simply a link to a premade function within the code of a module; anything passed to the action upon its activation is also sent to its target function. That target function can itself be an input, so upon its activation it can trigger another activator. This depends on the actual code inside the action function, but it allows for some very complex behaviors.
Each input/activator/action together makes a node, and the user will be able to wire these nodes together to create the desired behavior.
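A minimal sketch of that input/activator/action wiring, with hypothetical names (the real system also passes values along the connections and saves/loads the graph):

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of the input -> activator -> action wiring.
public class Input
{
    public event Action Triggered;          // e.g. a keypress or a variable change
    public void Fire() { Triggered?.Invoke(); }
}

public class Activator
{
    readonly List<Action> actions = new List<Action>();

    public void ListenTo(Input input) { input.Triggered += OnInput; }
    public void Connect(Action action) { actions.Add(action); }

    void OnInput()
    {
        // Decide whether/which actions to run; here we simply run them all.
        foreach (var action in actions) action();
    }
}

// An "action" is just a delegate bound to a premade function on a module, so
// the user can only invoke what the module was programmed to do. Usage:
//   var fireKey = new Input();
//   var act = new Activator();
//   act.ListenTo(fireKey);
//   act.Connect(() => laserModule.Fire());  // hypothetical module function
//   fireKey.Fire();                         // pressing the key fires the laser
```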
The above system has mostly been implemented so far, with the exception of passing in and handling values, saving and loading the logic, and the actual nodes. It works well for simple things like a keypress. One of the reasons that actions have to activate a target (predetermined) function on the module, instead of arbitrary in-game functions that the game needs to run, is this: it limits the user to the module's actual capabilities. I.e., you cannot do something with a module unless it has been programmed to be able to do it. In this way you can control the spacecraft however you like; your only limitation is its limitations. This makes sense: you wouldn't want someone changing their max health, making themselves indestructible, or pulling other hacks.