Inverse Bell Curve
There has been a tendency recently to produce games using bell curves. Roll two or three six-sided dice and add the results together, and you end up with a pattern where central values are more probable and outliers (whether high or low) are less so. The FATE system is similar: using dice that generate results of "+", "-", or "0", you roll a handful and end up with values centred around zero, with progressively less chance of extreme positive or extreme negative results.
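As a quick sketch of that central clustering, the 2d6 curve can be enumerated exactly rather than simulated:

```python
from itertools import product
from collections import Counter

# Enumerate all 36 ordered outcomes of 2d6 and count each sum.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))

# Print a simple histogram: 7 is the peak, 2 and 12 the rare outliers.
for total in sorted(counts):
    print(f"{total:2d} {'#' * counts[total]}")
```

The middle value (7) comes up in 6 of the 36 outcomes, while the extremes (2 and 12) each come up in only 1.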
But is it possible to come up with the exact opposite? A method of randomising results that has a lower probability of generating these central values, and more chance of extreme outcomes (whether positive or negative)?
I've been contemplating this for a couple of hours and haven't been able to come up with a good solution yet.
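That said, one crude sketch suggests the shape is at least achievable (this is purely an illustration of mine, not an established mechanic): roll two d6 and keep whichever die lands farther from the centre value of 3.5. Enumerating all 36 ordered pairs shows a valley in the middle:

```python
from itertools import product
from collections import Counter

def far_die(a, b):
    """Of two d6 results, keep the one farther from the centre (3.5)."""
    # On a distance tie, max() keeps the first die; over all ordered
    # pairs this is still symmetric, since (a, b) and (b, a) both occur.
    return max((a, b), key=lambda d: abs(d - 3.5))

# Exact distribution over all 36 ordered pairs: high at the ends,
# low in the middle -- an inverted bell.
counts = Counter(far_die(a, b) for a, b in product(range(1, 7), repeat=2))
for face in sorted(counts):
    print(f"{face}: {counts[face]}/36")
```

This gives 1 and 6 a 10/36 chance each, 2 and 5 a 6/36 chance, and 3 and 4 only 2/36 each, so the extremes dominate while the middle stays possible.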
"Why would you want this?" I hear you ask.
There are a few reasons.
Firstly, because "middle of the road" is safe and boring. The extremes are where the fun lies: the critical hits and the critical misses are the events that make an anecdote interesting. They may not be good for survival, but they really drive a story to interesting places.
Secondly, because it's different to everything else out there... this is probably an over-generalisation; a flat d20 roll is pretty common in a lot of games, and there could easily be games that I haven't considered. Still, what I'm looking for isn't a game mechanism that drives results toward conformity, but something that instinctively pushes the envelope.
Thirdly (and this is where my true motivations lie), because I'm trying to work out a hit location grid that tends to produce hits in the outer parts of the grid rather than the centre-mass. The basic concept is a robotic AI that can suffer damage as the body is hit...the core of the AI is in the middle of the grid, and the less valuable parts of the programming radiate from it. When a hit is scored, I want it to have a better chance of hitting those areas on the outer edge (the skills and upgrades) rather than the inner core (where the true AI sits)...but there still needs to be a chance of hitting that inner core, otherwise there's no risk.
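One way to sketch that grid bias (illustrative only; the grid size, the weighting scheme, and the `roll_hit` name are all my assumptions, not a settled design): weight each cell by its ring distance from the centre, with a small base weight so the core itself can still be hit.

```python
import random

SIZE = 5          # hypothetical 5x5 hit-location grid
CENTRE = SIZE // 2  # the core AI sits in the middle cell

# Weight each cell by its ring (Chebyshev) distance from the centre,
# plus a base weight of 1 so the inner core remains at risk.
cells = [(x, y) for x in range(SIZE) for y in range(SIZE)]
weights = [1 + 2 * max(abs(x - CENTRE), abs(y - CENTRE)) for x, y in cells]

def roll_hit(rng=random):
    """Pick a hit location, biased toward the outer rings of the grid."""
    return rng.choices(cells, weights=weights, k=1)[0]
```

With these numbers the centre cell carries weight 1 out of a total of 105, so the core takes roughly 1% of hits while the outer ring (weight 5 per cell) absorbs most of them; scaling the ring multiplier tunes how protected the core feels.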
Just toying with the idea at the moment, too many other things to focus on, but any suggestions would be appreciated.