Official development blog

Adventures in Map Zooming, Part 1: Realtime Image Scaling

A few years back I introduced an experiment to demonstrate a potential “overmap” implementation that still obeys the rules of the terminal interface. What about the opposite--a way to zoom the map itself? Obviously this would be intended for a completely different purpose, addressing one of the more common complaints about Cogmind, that on some displays and play environments everything is rather hard to see.

Prior to this I’ve always framed the zooming discussion as a full-UI thing, where not just the map but everything needs to be larger, which is kind of a show-stopper when there is a minimum number of text elements required to be visible at all times for proper play as designed. But maybe if we only zoomed the map it’d work for some people who otherwise can’t play?

It’s hard to say whether this would satisfy some people since the text elements would remain small, but maybe, for example, using an alternative font like Terminus is sufficient for those parts, and the map is what we should focus on. Anyway, it could be worth experimenting with, and I’ve moved up the timeline for doing that.

Why now?

I’ve always been interested in experimenting with larger alternative interface layouts, though because I didn’t see much promise in them, and doing so deviates from the core design, the idea was to wait until at least a likely engine update down the road, as well as the completion of most of Cogmind’s content.

Well, this year money issues have had an influence on my near-term direction :P

Revenue has fallen quite a lot--it has been over 10 years of dev at this point--and now that Cogmind is being developed at a bigger loss I need to start worrying about revenue again… (This will also likely lead to some release timeline adjustments down the road.)

Anyway, to the issue at hand, hopefully with a larger map view Cogmind will be able to appeal to enough additional people who are otherwise okay with the rest of the game--I know there are some out there, and it’ll be better for revenue going forward!

On that note, I must thank all patrons for making the ongoing expansions much more feasible. I’ll admit expansion-level content releases for a niche game without explicitly charging for them isn’t really feasible forever, but I don’t want to split the game world into DLCs--it’d be bad for the design so it’s all or nothing, and there is just so much cool stuff still to add. It must be done. It will be done :)

It’s time to experiment!

Mockups

Any proper UI work is likely to start out with mockups--might as well play around with relatively simple images before investing a greater effort into code…

cogmind_zoom_map_mockup_size18

There you have it, a mockup depicting Cogmind using a size 18 font (Terminus for better readability overall) combined with all map tiles doubled in size (1080p@16:9, the most common Cogmind player resolution).

Okay so each tile actually occupies four times the usual amount of space, but what we mean here is that the cell dimensions are doubled. Something in between 1.0x and 2.0x might be more ideal from a size and visual balance perspective, but doubling is more feasible for retaining the actual aesthetics, both in terms of cell alignment and pixel accuracy, and might also provide us with other benefits later.

The mockup is also missing some components that eventually must be considered, such as what text over the map might look like as far as object labels and other info overlays, but that’s not important right now--more of a detail to consider when the time comes, assuming the fundamental feature even works out at all.

Architecture

It’s time to enter… the zoomiverse!

cogmind_zoom_map_imagebased_wip_blooper1

Okay this is an early blooper, we’ll get to that in a moment ;)

Just how to implement selective zooming in a terminal emulator that is made to do no such thing is a bit of a dilemma.

Terminal-based engines don’t behave like a normal game engine where you have individual windows represented essentially as images layered on top of one another and their contents can therefore be scaled individually. Instead there are many layers of cell data from different subconsoles that feed into a single root console which is then converted to the final image.

Under this kind of architecture it’s not all that reasonable to insert images into the mix, or to apply to entire subconsoles the sorts of wholesale transformations that are easy and obvious to apply to images. We can’t just go “oh sure, tell the computer to zoom that window and we’re done with it!”

Sticking to the terminal system’s constraints is great for helping maintain visual consistency, and keeping the overall architecture and interface relatively simple, but if we want to zoom the map things are going to get complicated beyond the scope of what the engine can normally do on its own.

Over time I’ve brainstormed 4 different theoretical approaches to zooming the map, and most recently expanded that to 5~6 (an exact count depends on how different one needs to be in order to be considered unique). Some are more involved than others, and each comes with its own tradeoffs, though having no actual experience with this feature in practice, its true complexity and scope are not immediately apparent. Therefore it makes sense to start with the easiest, least intrusive option regardless of all other factors, just as an experiment to gain a better understanding of what the results feel like, and collect a list of design issues that would need to be tackled to make this a reality.

Realtime Scaling

The first and simplest method is absolute brute force (of course :P). Let’s stretch some pixels.

Sounds easy enough to take the map area and blow it up, yeah? Well, not really xD. Cogmind doesn’t know anything about images, and the engine doesn’t know anything about Cogmind or its UI structure, so we’re going to need a little extra communication between the two on this point.

The basic steps of the cooperative process:

  1. Cogmind registers a callback function with the engine, letting it know that it wants to zoom an area of the interface every frame.
  2. When the engine is about to render a frame, it first lets this function know about it and expects to be handed an image in return. That image is created by Cogmind itself by forcing the interface to render [mostly] normally in between frames, copying a central portion of the map view directly to an image, then scaling it up to fit the normal view dimensions.
  3. The engine finishes rendering its normal frame, then at the end of that frame takes the zoomed image sent by Cogmind and copies it over to the desired area before displaying the final results on the screen.
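The steps above can be sketched roughly like so--a minimal Python mock-up with entirely hypothetical names (`Engine`, `register_zoom_callback`, `make_zoom_image`), since the real engine is a custom terminal system and none of this is its actual API:

```python
# Illustrative sketch of the engine/game handshake for realtime map zooming.
# All names are hypothetical; the real engine is a custom C++ terminal system.

def scale2x(pixels):
    """Nearest-neighbor 2x upscale of a 2D list of pixels (brute force)."""
    out = []
    for row in pixels:
        wide = [p for p in row for _ in range(2)]  # double each pixel horizontally
        out.append(wide)
        out.append(list(wide))                     # then duplicate the row vertically
    return out

class Engine:
    def __init__(self):
        self.zoom_callback = None

    def register_zoom_callback(self, fn, area):
        # Step 1: the game asks to have `area` replaced with a zoomed image each frame.
        self.zoom_callback = (fn, area)

    def render_frame(self, root_pixels):
        if self.zoom_callback:
            fn, (x, y, w, h) = self.zoom_callback
            zoomed = fn()  # Step 2: game hands back a pre-scaled image
            # Step 3: copy the zoomed image over the target area post-render.
            for dy in range(h):
                for dx in range(w):
                    root_pixels[y + dy][x + dx] = zoomed[dy][dx]
        return root_pixels

def make_zoom_image(map_pixels, view_w, view_h):
    """Game-side: crop the central half of the map view, then scale it 2x
    so it fills the full view dimensions again."""
    x0, y0 = view_w // 4, view_h // 4
    crop = [row[x0:x0 + view_w // 2] for row in map_pixels[y0:y0 + view_h // 2]]
    return scale2x(crop)
```

Those per-pixel Python loops also hint at why software scaling every frame gets expensive fast.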

If this sounds terrible, that’s because it is :P

The performance of realtime software scaling is no good, with a first iteration tanking my FPS from 240 to 40, and that one wasn’t even working right. Once it got “fixed” the real FPS was more like 24, definitely far below acceptable.

But it did work! Hacky and incomplete though it may be…

cogmind_zoom_map_imagebased

Still image of a zoomed Cogmind map, working in game, based on realtime software scaling.

Here it is in action:

cogmind_zoom_map_imagebased

Them’s some big ASCII--a working realtime software-scaled map view in Cogmind.

I say “in action,” but really only keyboard input would work in this test since the mouse still doesn’t know anything about this image resize weirdness.

Essentially in this form it’s nowhere near complete, and not performant either. While there are plenty of potential optimizations, optimizing this kind of architecture makes little sense since much of the work would be made obsolete by an inevitable switch to hardware acceleration, yeah? Scaling and copying a few images would be nothing for a GPU, but for now Cogmind is CPU-bound and that isn’t changing in the near term.

A few other problems I noted:

  • Tons of artifacts created when toggling the zoom (I believe any such zoom feature would need realtime toggling).
  • An image-based approach is not directly compatible with some other interactive visual systems that expect to have cell-specific knowledge at various locations, so there would need to be a layer of translation that tends to complicate things.
  • Not only is an image-based map unable to take advantage of the normal dirty rect system, in fact it requires turning off that system completely, meaning the engine is always rendering every frame in full. That’s pretty slow, compounding with the software scaling work. In my fullscreen tests, forcing a full render every frame in the normal game gives an FPS of 60*, while realtime image scaling drops it to 25. I managed to up it to 30 with one optimization, but it’s still far from ideal, plus you don’t really want to constantly be rendering at max speed in the first place, even as a cost of getting a map zooming feature. *These speeds are in my dev build, which has a lower FPS than released versions, so I’m just looking at it for relative comparison. (Aside: “Dirty rects” are an important concept in gamedev, whereby you keep track of known areas of the screen that have changed since the last frame, for example defined by a list of rectangles, and only update those areas during the render, since any unchanged areas should remain the same and don’t need to be updated. This is also where artifacts may originate in a game’s display, where perhaps an area changed but was never marked “dirty” for updating along with everything else.)
  • The zoomed area would need more nuance, probably in the form of a mask, since it doesn’t distinguish what’s in the target area at all and simply scales everything, so you try to hack a machine and get this…
cogmind_zoom_map_imagebased_hacking

Many different UI elements can appear over parts of the map view, not just the map itself :/
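For anyone curious, the dirty rect concept from the aside above can be sketched in a few lines--a hypothetical minimal tracker, not the engine’s actual system (real implementations also merge and clip overlapping rects):

```python
# Minimal dirty-rect tracker: only redraw screen regions marked as changed.

class DirtyRects:
    def __init__(self):
        self.rects = []  # list of (x, y, w, h) regions changed since last frame

    def mark(self, x, y, w, h):
        self.rects.append((x, y, w, h))

    def flush(self, backbuffer, screen):
        """Copy only the dirty regions to the screen, then reset the list."""
        for (x, y, w, h) in self.rects:
            for dy in range(h):
                for dx in range(w):
                    screen[y + dy][x + dx] = backbuffer[y + dy][x + dx]
        count = len(self.rects)
        self.rects.clear()
        return count  # number of regions updated this frame
```

An image-based zoom defeats this optimization entirely, since the zoomed output depends on pixels across the whole map view--hence having to mark every frame fully dirty.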

As with any project dealing with rendering work, this one had its fair share of funky bloopers along the way. One of the main issues was a logic error causing the map to repeatedly zoom itself, resulting in a recursive scaling effect.

cogmind_zoom_map_imagebased_wip_blooper2

Had some funky problems with coordinates for zoom centering, too :P

cogmind_zoom_map_imagebased_wip_blooper3

The Next Step

The map zooming implementation as shared here isn’t ideal, or even acceptable, though the good news is I do have that longer list of possible methods, and will be working with one of the better, if more involved, ones that could solve all the issues presented here.

A smarter approach should play by the engine’s rules, but will take a while longer to implement. The important thing is that playing with this idea showed me what it would feel like and gave me an opportunity to formulate concepts for some promising complementary features which might make this more feasible in a UX sense as well.

Before properly testing out those ideas, however, I’ll need to put together an implementation that won’t try to fry my CPU ;)

This is the first in a five-part adventure through the process of putting all this together:


Experimenting with Drone PIP

Early this month I was streaming a Cogmind run on Twitch, at one point once again sending out drones to scout around as I like to do. Drones are generally pretty fast, and I’ll often just make sure I’m in a relatively safe spot and keep an eye on what the drone(s) find while I wait and decide where to go next.

The drone centering and following feature added back in Beta 10 has been quite useful for this purpose. (For years before that you had to actually scroll the map yourself!)

cogmind_drone_cycling_with_fov

Cycling through active drones to see their surroundings, which also allows passing turns while following their view.

But in this particular stream, as I sat there waiting to see what they’d uncover, I also kinda felt like it was a situation in which I already knew where I wanted to head, but it would be somewhat tedious to both move and keep an eye on what the distant drones were up to before they splatted on some trap or got blasted by a hostile squad.

I blurted out that we need a drone PIP (picture-in-picture) feature to make it easier to operate while scouting drones are active, and of course ever since I mentioned that it stuck in my head, just asking to be explored as a feature. So I found some time to try it out a little while ago!

The technical premise is pretty simple: Since Cogmind is a monospace terminal interface, when drawing the map we just have to find some space in the view where we can copy over a portion of the cell data from the drone’s surroundings to somewhere the player can see.

Finding this space isn’t too difficult since Cogmind mechanics and UI were designed to fit a 4:3 display anyway, despite most everyone these days using 16:9 or higher, meaning there is usually a decent amount of space available at the left and right sides of the map view which is not as vital to be able to see at all times. Those are the spaces we can cover with other things, which is also why you see non-modal info presented in those four corners for various purposes.

cogmind_map_corner_ui_element_sample_excerpts

Sample info that can overlap map edges: Detailed combat log to the top-left/left, audio log in the top right, special mode menus to the bottom left, and achievement notifications in the bottom-right corner.

Anyway, I wouldn’t be worried about coming up with room for a proper implementation, though I was interested in seeing how it feels when active, since having such an interface feature working even in experimental form has a way of triggering deeper thoughts about potential roadblocks or related features.

Experimental PIP

Lo and behold, a basic drone PIP in action…

cogmind_drone_pip_test

Pretty cool! This didn’t exactly take all that long to implement for testing, but that’s all it is here, just a quick and dirty experiment thrown together in order to watch it--maybe 50 lines of code in all.

The process is quite simple: When rendering the map, check if there’s an active drone, and if so:

  1. Pause the rendering to first force another render around the drone’s own location.
  2. Save the console info from around the drone, then return to finish the normal map render.
  3. When that’s complete, copy the drone visual back over the map area wherever the PIP should appear.

In the sample above that’s the top-left corner.
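That copy-and-blit step can be sketched like so, treating consoles as simple 2D grids of cells (hypothetical helper names, not the actual engine API):

```python
# Sketch of the drone PIP copy: save a window of cells from a render centered
# on the drone, then blit it into a corner of the normally rendered map view.

def copy_window(src, cx, cy, w, h):
    """Save a w-by-h block of cells centered on (cx, cy)."""
    x0, y0 = cx - w // 2, cy - h // 2
    return [row[x0:x0 + w] for row in src[y0:y0 + h]]

def blit(dst, block, x, y):
    """Copy a saved block of cells onto the map view at (x, y)."""
    for dy, row in enumerate(block):
        for dx, cell in enumerate(row):
            dst[y + dy][x + dx] = cell
```

With monospace cell data this really is all there is to the core of it--no pixel work required, unlike the map zooming experiment.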

I’m not planning on adding this feature for now, though building it would require thinking about how the window interacts with other UI content that can appear in locations it might occupy, and the best default location. It might even be interesting outside the map, up in the top-center console, though that comes with its own problems in addition to that area’s relatively limited height.

There are also questions about whether and how to get the window itself to properly display other UI-related content that can appear in drone-explored areas, for example (and most importantly) item/enemy labels, which don’t otherwise show, as you can see in the demo above.

Proper PIP

What would it take to turn this into a real feature, besides cleaning up the pretty hackish code? :P

Well the PIP only needs to activate once the drone is no longer visible on the map view, but okay that’s super low-hanging fruit, let’s try the more complicated stuff…

  • Maybe window dragging to actually put the PIP monitor wherever you want
  • If currently following/locked onto a distant drone in the main UI, the PIP could perhaps switch to show the area around Cogmind’s position instead
  • Multiple simultaneous PIP panels for more than one active distant drone (now I’m imagining having a boatload of combat drones out working for you and Cogmind is like the security guy just watching everything play out on a dozen monitors xD)
  • Setting your own “PIP point”, for example around a Terminal with an active tracking Trojan
  • Of course there’d be customizable PIP size

As you can see, this one UI feature spawns lots of neat ideas…

It’s also a ton of work though, and based on its effort:usefulness ratio I have to assign this one a pretty low priority. Although shelved for now, I might revisit it alongside the next big drone mechanics update I want to do?


Projectile Deflection

Ever since my first design docs from Cogmind pre-alpha, written back in 2013, I always wanted a projectile deflection mechanic. Like it doesn’t get much cooler than that, being able to swat away bullets or even send beams back at an attacker, or redirect them into other targets.

Every few years I think about the concept again, and it finds its way into my ongoing notes in yet another location in some slightly-altered form or alongside some other content idea, begging to be implemented. At the same time, being so cool it’s likely one of those rarer abilities that ends up more suited to unique parts, and even after ten years of development had yet to find a proper home on Tau Ceti IV.

One day several months ago I decided the wait had gone on long enough and this mechanic needed to happen now. Who cares if I didn’t even have a specific use case yet?

This marks the first time in Cogmind history that I implemented a new mechanic before actually having a spot for it in game--I had no idea what item(s) or robot(s) would make use of this feature. But I did know that we’re now in the Era of Cool Expansions outside the core game, so I’d find a place before long, and as it happened, around that time I also needed a smaller project to tackle while getting my health back on track after repeated concussions (seriously annoying and a huge drag on development for much of this year). The opportunity was ripe to implement something fun and compartmentalized, as opposed to the grand projects I was supposed to be working on at the time, which would take a lot more brain power.

Not Recommended

In gamedev it’s generally a Bad Idea to do this, building mechanics for your game that you aren’t ready to apply anywhere. Having one or more detailed use cases helps you think through your specific needs and acceptable limitations, otherwise when it comes time to use a generically-implemented feature for an unplanned use case, you may either find there are problems or insufficiencies and have to rewrite it, or eventually discover that you did too much work because you’re never going to actually use all those features or options you built in anyway :P

I already know I’ve built a system that’s probably more flexible than I’ll make use of, based on a lot of hypotheticals as opposed to real world scenarios, but at least it includes a set of features I do know I want.

The main other aspect to worry about when prematurely constructing any non-simple system is that when it comes time to use it and you find the system doesn’t quite fit your real world needs, you then have to go back to make adjustments, which can easily be more time consuming than it would have been if the initial implementation happened at the same time, given complete familiarity with the system at the time. It’s just potential for extra overhead, really, but dammit I wanted to see projectiles bouncing around!

Aside: This whole section strongly reminds me of those who design, build, and expand game engines without actually using them to make a game, though that’s on a different scale altogether.

Flexible Deflection

So I tried to cover all the bases by building a flexible implementation for projectile deflection, identifying tweakable parameters in order to hopefully support a range of fairly different items, of course all still centered around the idea of making sure projectiles hit something other than their intended target ;)

Among the main features of this system:

  • random deflection within a specified relative arc
  • deflection that always counterattacks the source
  • redirecting projectiles to target nearby hostiles
  • categorization of projectiles for deflection purposes, and the ability to specify a category to deflect, or deflect all of them

Architecturally, deflection of projectiles is somewhat related to the penetration and guided waypoint systems, which served as a good starting point for implementation. The main requirement is to be able to give the projectile a new trajectory at the right time and let it continue on its way.
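As a rough illustration of those tweakable parameters, here is a hypothetical heading picker covering the arc, counterattack, and retargeting variants from the feature list (simplified math, not the actual implementation):

```python
# Sketch of picking a new trajectory for a deflected projectile.
# Hypothetical names and simplified 2D heading math.
import math, random

def deflect_heading(incoming_deg, mode, arc_deg=90, hostiles=None, rng=random):
    """Return a new heading (degrees) for a deflected projectile.

    incoming_deg: direction the projectile was traveling.
    mode: 'arc'     = random bounce within a relative arc,
          'counter' = straight back at the source,
          'target'  = redirect toward a nearby hostile's bearing.
    """
    back = (incoming_deg + 180) % 360  # reverse direction, toward the source
    if mode == 'counter':
        return back
    if mode == 'target' and hostiles:
        # Pick the hostile bearing closest to the reversal direction.
        return min(hostiles, key=lambda b: abs((b - back + 180) % 360 - 180))
    # 'arc': random heading within +/- arc/2 of the reversal.
    return (back + rng.uniform(-arc_deg / 2, arc_deg / 2)) % 360
```

Category filtering (which projectile types a given item can deflect at all) would then just be a lookup performed before ever calling something like this.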

On that note, building a deflection system actually had multiple indirect benefits, improving the game in other areas along the way. One of these addressed the longstanding issue of guided projectiles always measuring weapon range from the original source, rather than by the actual path traveled, enabling them to exceed their maximum range in some cases. It wasn’t too much of an issue, hence why I’d left it like that for years, but it also comes into play with deflected projectiles, so it was about time to do something about it--projectiles traveling along a non-straight path now measure cumulative range for accurate distance calculations:

cogmind_projectile_cumulative_range_limiting_guided

Firing a guided projectile to the left here and reversing its direction will not reach the targets on the right, but if firing to the right they are all in range. Before this update, firing to the left and circling back around would still work. Not a useful tactical situation by any definition, just demonstrating the difference in action :)
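The difference can be shown with a quick sketch comparing cumulative path length against straight-line distance from the source (hypothetical helpers, with positions as simple 2D points):

```python
# Cumulative range limiting for non-straight projectile paths: sum the length
# of each traveled segment instead of measuring distance from the firing origin.
import math

def within_range_cumulative(path, max_range):
    """path: list of (x, y) waypoints the projectile has traversed."""
    traveled = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    return traveled <= max_range

def within_range_from_origin(path, max_range):
    """The old behavior: only distance from the original source matters."""
    return math.dist(path[0], path[-1]) <= max_range
```

For a guided projectile that doubles back, e.g. out 5 cells, back 5, then forward 4, the old check sees a target only 4 cells away while the new one counts all 14 cells traveled.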

Shiny Applications

So while I had no spot in Cogmind for deflection just yet, or any plans to even include it in the next release, the soonest I saw it appearing was with the Unchained, a deadly group of hostile unique derelicts working for MAIN.C.

And no doubt beyond that as well, since there are a good number of ways to use this new tech. When we think of deflection, the obvious use case is in defensive shield technology, but most any item is allowed to have deflective properties, meaning we must have an energy sword that can deflect incoming projectiles. Because duh.

Below are some recordings made while I was building the system. Note they’re all just test shots with fake items I was using to make sure everything works as expected, so don’t read anything into that aspect. These also aren’t very realistic scenarios--just having a bit of fun :)

projectile deflection demo

The very first system test, deflecting grenades back in the direction they came from within a 90-degree arc.

 

projectile deflection into enemies

Similarly deflecting incoming shots within an arc, but sending them back into an enemy within that arc. Cogmind is not firing a shot here, just passing turns :P

 

projectile shedding deflection

Deflecting kinetic projectiles backwards, like shedding them off to the side.

 

missile deflection into enemies

A 360-degree deflection capability redirecting an incoming missile at a nearby group of Swarmers.

 

projectile bouncing ring

Spectating a dangerous game of hot grenade with the local opposing Demo squads.

 

projectile deflection animation

The system also supports per-item animations and SFX for the deflection, seen here using a random placeholder to confirm that it works. (You can also see there deflection benefiting from the cumulative projectile range limitation feature, unlike the earlier shots before that was implemented.)

 

projectile deflection in combat log

Also yes, deflection is recorded in the all-important revamped combat log coming to the next version.

Well I can finally cross deflection off my list!

Funny enough, back when I first wrote about building this feature on Patreon, there really was no intent to add any deflection-related items to the next release, which is already packed with so much other stuff. Fast forward to a couple months later and there are already no less than three different such items xD

This is the fifth and final post in a series on new item mechanics. I didn’t cover anywhere near everything (or even the coolest mechanics because I don’t like to spoil much :P), but some of these also offer a chance for relevant discussion of the bigger picture:


A Simple Approach to Player-Designed Robots

I like the idea of designing robots, putting together builds for a particular purpose or with particular capabilities in mind. As I’ve stated many times before, my first influence for Cogmind was the original BattleTech board game, where my friends and I wouldn’t just take stock mechs, but designed our own based on the rules, selecting the right combo of weapons, heat sinks, armor, and special tech.

battletech_mech_sheet_sample

A BattleTech loadout sheet. Sadly all my old BT books and records are at a relative’s house right now, otherwise I might go through them and share a thing or two :)

 

mwo_mech_loadout_sample

A Mechwarrior Online build. I also played all the early PC BT games, and later MW spinoffs and related games, all the way up to MWO (though had to stop some years into it because for some reason my new computer kept glitching out on it, but it’s probably for the best :P).

Even more than being a tactical roguelike, Cogmind is a strategic game about engaging in a dynamic form of this process, repeatedly replacing and upgrading components as you go. And based on the current situation, or plans for what’s to come, you can even pivot your whole build at one point or another.

So if bot-building is a main feature, and players of Cogmind therefore most likely enjoy bot-building, then perhaps there are other areas where we could apply this activity? What about designing robots other than yourself?

Actually Player 2 mode sort of has this quality to it, since even if your opinionated ally is technically responsible for putting themselves together, they can only do so using parts they have access to, which like in your case are acquired entirely from their surroundings. If you can control what they get their hands on (e.g. by destroying or stealing whatever you don’t want them to have), you can indirectly control what they are more likely to build. And when they need parts to complete or improve their build, you can also drop stuff nearby and see if they’re interested in it. It’s not quite reliably building your own robot from scratch, but you can influence the outcome, so there’s that.

cogmind_player2_loadout_samples

Player 2 builds shared by various players over the years.

Some players have been known to take Player 2 sculpting to the next level and even try to carve specific parts off their friend (chop ’em up!) in order to force them to use alternatives, either from their inventory or provided by Cogmind.

But yeah, clearly still not the same thing :P

How else could we add robot construction? Well I don’t really want* to go as far as creating a full-blown in-game design system with a dedicated interactive UI like you might find in some sort of RTS or games with a vehicle design element. That’d be fun, but overboard.

*Oh no, on the heels of my previous post about special modes and their experimental test bed nature, I realize that yes I do actually want to try that, and some forms of this could make for a very interesting event xD

It’s got to be simpler than that… How about simply dropping a bunch of parts on the ground and telling them to be a robot?

The Botcube

Say hello to the Botcube, and the new friend it will create for you, or more specifically the new friend you will create with it.

cogmind_botcube_art

It’s a cube. It makes a bot.

Usage is indeed quite simple: Drop the Botcube on the ground and interact with it in the normal way (‘>’ or left-click) and it will start the creation process, turning itself into a brand new robot. Hopefully by that time you’ve prepared a collection of suitable parts for it to use, lying around on the ground within range (or maybe are just dropping Botcube in a pile of post-battle wreckage to see what happens?).

Every couple turns it will suck up a new part and merge with it, until either all potential slots are full or there are no more compatible parts nearby. Then beep boop your new friend will self-activate.

cogmind_botcube_building_tiles

Creating a Botcube mutant. In this test recording, a robot created by the Botcube is represented with a Mutant tile, but will have its own tile once released.

Your history log will conveniently record the feat.

cogmind_botcube_history_log_sample

The history log also includes some basic information about what kind of Botcube friend you created.

Don’t forget to inspect your formidable new ally!

cogmind_botcube_building_ascii_with_info

Inspecting the Mutated Botcube’s final stats. Its core attributes are static, though as with most robots the majority of their capabilities are defined by their loadout.

Implementation of this feature was somewhat quicker than it otherwise would have been because I already had a template for the process of building a bot from nearby parts, originally used in Abominations, one of the advantages I wrote about in the previous post.

Complementary QoL

As a player, what do you need most to facilitate using this sort of tech? At least some way to tell which parts the Botcube is compatible with. (The answer: most of them, but there are a few exceptions, generally things that AI-controlled bots can’t really use, or that aren’t suitable for certain types of allies from an architectural standpoint.)

In order to help out with that, while standing on the Botcube (the only position from which to activate it) all nearby compatible parts will intermittently flash. Anything not flashing will be ignored.

cogmind_botcube_compatible_part_highlighting

Botcube compatibility highlighting demo.

This also coincidentally makes it obvious which items are within range to be utilized--double QoL!

Ancient History and Distant Future

In implementing this feature, I’m reminded of one of the times Cogmind was written about in Rock Paper Shotgun years ago, where the author assumed you would also be “building your own allies” in the game. Anyone who’s played in the years since will know that’s not really much of a thing aside from fabricating bots based on existing designs, at least not in the sense implied by the original assumption (designing them as well), but I guess we can have some of that now :)

I bet there will be a range of new theorycrafting discussions to go along with this mechanic as well. Really looking forward to what people do with this one.

On that note, one area of concern here is that Cogmind’s allies are notoriously disposable, by design. You put effort into collecting a few parts you want to see merged into one bot, or even *gasp* take time to plan one with care, only to have it gunned down by the next squad. Yeah that would suck.

We can’t exactly make them invincible, either, but to at least protect against rapid death they’re given high EM resistance, a decent amount of core integrity, and low-ish exposure. With your level of control you can of course also choose to armor them for better coverage, or improve other stats you think will help them survive, or aid you in your situation. They’re essentially akin to a customized Hero of Zion.

There are also more possibilities for this tech, which I had actually revisited in my notes a couple years back to specify that it would be appropriate as a piece of primary content within yet another new map that might happen further down the line. But since I do think this mechanic will be fun to play with, and there’s no guarantee that other map will actually happen, I think this is a good opportunity to at least provide a taste of it!

I can also use this opportunity to hint at its origins for the lore.

cogmind_botcube_art_two

:D

This is the fourth post in a series on new item mechanics. I won’t be covering anywhere near everything (or even the coolest mechanics because I don’t like to spoil much :P), but some of these also offer a chance for relevant discussion of the bigger picture:


Special Events Give Back, and Perfect Stealth

Cogmind’s “special modes,” timed events with unique mechanics, can in one sense be seen as the experimental test beds of Cogmind. Sometimes ideas come along that are interesting to play with, but either aren’t suitable for the regular game, or I don’t feel the effort and architectural requirements needed to support them are worth it compared to all the other content options awaiting development. And although I don’t usually go into building such a mode with the intent to test ideas potentially applicable in the regular game, the results often do inspire such features down the line.

One new example of this phenomenon at work is the ID Mask.

Finally, Perfect Stealth

In short, the ID Mask is a new consumable “disguise utility” that allows you to travel completely unnoticed and untracked in Complex 0b10, waltzing right past enemies if you want to.

[Image: cogmind_id_mask_info]

The holy grail of stealth tech, if you’re not welcome in Complex 0b10.

The ability to disguise oneself like this has been requested by players since Cogmind’s alpha days (shout out to its main proponent, zxc!), but to me it wasn’t the sort of tech the game was ready for, be it for balance, architectural, or lore reasons. A confluence of factors has contributed to now being the time it’s finally ready to be designed in.

Using an ID Mask is fairly straightforward, just pop it on to start the clock and enjoy your anonymity for as long as it lasts. I’m sure it can save your butt, or enable some sneaky tactics.

[Image: cogmind_id_mask_usage]

Hm, more Behemoths maybe calls for more masks? :P

These will also fit deeper into the lore, as I prefer for any tech in the game, though some of that lore (and more sources to acquire one!) will come in future versions beyond the initial inclusion.

Polymind and Other Test Beds

If you played Polymind you’ll probably recognize this ability, which is actually where the architecture comes from in the first place. As special events do, Polymind introduced new mechanics that needed to be built into the system, both showing us that they’re possible, and also allowing everyone to test how they play out.

Several major new features were required for Polymind to work, one of which was the central idea that “0b10 bots can ignore you.” It’s possible there might still be some kinks in there somewhere when it comes to special cases, but it should at least work as well as it did for Polymind, and adjustments or fixes can be made if necessary.

But anyway this is one example of a mechanic originally unique to that mode now becoming embodied in a specific item available in regular Cogmind! Sooner or later the same thing may happen to some of the other Polymind-specific features, but as of now this is the first.

The first from that mode, anyway.

Another feature I’ve added for Cogmind’s next release takes a chunk of code from Player 2, the mode in which you’re accompanied by what is essentially an AI-controlled Cogmind capable of building and maintaining itself from items just as you do.

“P2” introduced a number of new mechanics, some of which I’ve always wanted to include in regular Cogmind (after seeing how they work) but haven’t had the chance. And no, the new feature is not a Cogmind-like ally, although that is something I would like to add to the regular game at some point if I get to it. (There are even perfect lore tie-in opportunities!)

Specifically, what I did was adapt Player 2’s contextual dialogue system for… something interesting ;)

[Image: nikolayag_player2_decisive_victory_comment_with_scatter_rocket_arrays]

Player 2 commenting (at bottom) after letting loose with a ridiculously massive amount of firepower against some targets in the caves (screenshot provided by nikolayAg).

Among other previous event influences, you might notice similarities between Abominations (Halloween 2019) and the Botcube, which I’ll be writing about next time.

I’ve never proactively used special events to test planned Cogmind features, but based on my experience incorporating and adapting ideas from past modes, and how much time that saves on both implementation and design (even when an idea never needs to make it into the regular game), I’ve started planning future special events with exactly this approach in mind: as a way to test features I might want to add.

Specifically for the past year or so I’ve had an idea for an event that could tie in pretty well to a future faction, so long as the gameplay works out when it becomes a central focus. It could be one of the potential Merchant systems.

This is the third post in a series on new item mechanics. I won’t be covering anywhere near everything (or even the coolest mechanics because I don’t like to spoil much :P), but some of these also offer a chance for relevant discussion of the bigger picture:
