
obstacles question


RedWardancer


I'm setting up minefields and concertina wire in a scenario.  I know how to work with them: duplicate, change the marking, and so on.  I'm just wondering if there is a speedier way to set them up instead of having to conjure them up and move them into place one by one.  You know how you can quickly route a march order along roads?  Is there a way to do the same thing for obstacles to save time?

 

 


47 minutes ago, Ssnake said:

Still needs individual adjustments. It's a help, but it'd be much easier if you could just define a polyline and then turn that into some linear obstacle.

 

Sure, but then some of us would want to be able to specify "Turn", "Block", "Disrupt" or "Fix" inside the belt / zone. ;)

 

Another Pandora's Box / be careful what you wish for area.


  • 3 weeks later...

Has anyone noticed a major problem with breaching operations?  I've run some tests recently as OPFOR using RF units.  One tank section would have a plow tank followed by a roller tank, accompanied by a section of MT-LB engineers.  When the lead plow tank clears the minefield, it would go into reverse and hit a mine somewhere along a retreat I did not order.  The roller tank would just veer off and also hit a mine.  The dismounted engineers do just fine; it just takes an unbelievable amount of time, which in truth is realistic.  I also spotted front-line units of tanks and IFVs getting stuck: the lead unit clears fine, but any of the following vehicles would get stuck as if in mud, or would veer off at the end of the breach.  I'll post a video of this soon, but it's a pain to watch how the AI messes it up.

 

Incidentally, I did not spot any problems clearing a minefield, steel beams, or wire on roads.


  • Moderators
On 4/4/2021 at 7:15 PM, RedWardancer said:

I'm setting up minefields and concertina wire in a scenario.  I know how to work with them: duplicate, change the marking, and so on.  I'm just wondering if there is a speedier way to set them up instead of having to conjure them up and move them into place one by one.  You know how you can quickly route a march order along roads?  Is there a way to do the same thing for obstacles to save time?

 

 

 

The best thing I have found to do here is to place one wire or mine obstacle (or bunker/vehicle emplacement) in the orientation you want, duplicate a few, slide them so they connect as desired, then place a new one oriented in a new direction, duplicate those (which will all be oriented in that new direction) and slide them into place, and repeat.  This limits the number of times you need to orient everything, which can be a huge pain, yes. -_-
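
For what it's worth, the "polyline" idea mentioned above boils down to something like the sketch below: walk a user-drawn polyline and emit evenly spaced segment centers with headings, which is essentially what the duplicate-and-slide workflow does by hand. The function, the coordinates, and the 50 m segment length are all invented for illustration; this is not actual Steel Beasts functionality or scripting.

```python
import math

def polyline_to_obstacle_segments(points, segment_len=50.0):
    """Walk a polyline (list of (x, y) map coordinates) and return
    (x, y, heading_deg) for evenly spaced obstacle segments.
    Purely illustrative; segment_len is an arbitrary choice."""
    segments = []
    leftover = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy)
        if length == 0:
            continue
        heading = math.degrees(math.atan2(dx, dy)) % 360  # 0 deg = +y ("north"), 90 deg = +x
        d = leftover + segment_len / 2            # arc position of the next segment center
        while d <= length:
            t = d / length
            segments.append((x0 + t * dx, y0 + t * dy, heading))
            d += segment_len
        leftover = d - segment_len / 2 - length   # carry the remainder into the next leg
    return segments

# Example: a simple L-shaped wire belt.
belt = polyline_to_obstacle_segments([(0, 0), (200, 0), (200, 150)], segment_len=50)
for x, y, hdg in belt:
    print(f"segment at ({x:6.1f}, {y:6.1f}), facing {hdg:5.1f} deg")
```

Each returned tuple would correspond to one obstacle segment to place and orient in the editor.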


19 minutes ago, RedWardancer said:

Has anyone noticed a major problem with breaching operations?  I've run some tests recently as OPFOR using RF units.  One tank section would have a plow tank followed by a roller tank, accompanied by a section of MT-LB engineers.  When the lead plow tank clears the minefield, it would go into reverse and hit a mine somewhere along a retreat I did not order.  The roller tank would just veer off and also hit a mine.  The dismounted engineers do just fine; it just takes an unbelievable amount of time, which in truth is realistic.  I also spotted front-line units of tanks and IFVs getting stuck: the lead unit clears fine, but any of the following vehicles would get stuck as if in mud, or would veer off at the end of the breach.  I'll post a video of this soon, but it's a pain to watch how the AI messes it up.

 

Incidentally, I did not spot any problems clearing a minefield, steel beams, or wire on roads.

 

It's a bit hard to comment without seeing what's going on, but how far beyond the breach was the next waypoint / tactic?

 

If you gave them a tactic just beyond the breach, perhaps the AI is trying to achieve that and is inadvertently reversing into the breach?


  • Members
2 hours ago, RedWardancer said:

When the lead plow tank clears the minefield, it would go into reverse and hit a mine somewhere along a retreat I did not order.

Yeah, difficult to comment here without knowing the situation.

If there is a reliable way to replicate this - maybe you can take your scenario and delete everything from it that is not required to demonstrate the case - we will certainly use it to investigate, and as a case for our automated test runs that all future versions of Steel Beasts must execute without failing before we even pass them on to the beta team.


I'm pretty sure what happened the first time was that I ordered the plow tank to breach through everything, too close to the edge of the obstacles, including the triple-wire fence, which messed the breaching process up.  The elements trying to pass through caused a pileup, and most units just didn't move through smoothly.  I saw a breach lane past the minefield and steel beams, followed by a short smooth gap, then another breached lane where the wire was.  This time, in this poor-quality video, I had the breaching tanks stop short of the wire, ordered the engineers to breach that, marked the lane, then ran the forces through.  No problems seen at all.  Please forgive the jumpy graphics; I'm way overdue for a GPU upgrade once prices fall (if ever).

 

Now if I can just get the recording settings right...

333455537_2021-04-2719-44-57.mkv


  • Members

I think we're better off with meaningful screenshots taken at key moments. That being said,

  • I noticed that the vehicles were occasionally driving on the spoils left and right of a breach lane. There are mines there, so don't. I don't know whether you drove there manually or whether the vehicles did this as part of autonomous maneuvering.
  • Mine plows, mine rollers, and concertina wire don't mix; they mess things up. So, generally, avoid driving into wire obstacles with them. What you need are dismounted engineers/sappers, or vehicles with a dozer blade.
  • We don't assume responsibility for deviant AI behavior (yet) if you mix too many obstacle types. I saw scatter mines on top of surface-laid AT mines (that will work), but if you created, say, a concertina wire obstacle right in the middle of an anti-tank minefield, the only universal breaching asset that you could still hope to use without manual control is dismounted engineers, and they are slow and vulnerable while they do it. There are only so many complications you can throw at the AI before "edge cases" start to dominate the behavior. Edge cases are where a minor perturbation of the starting conditions can result either in correct AI behavior or in something unexpected; the former creates the impression of competent behavior, the latter not so much. We can teach the AI not to drive into water, so it'll follow a coastline. We can teach the AI to avoid minefields, so it might even navigate around a minefield on a beach without driving into the water. Plant a forest on the beach with large boulders - well, at some point the complexity is too much. And I don't think the AI should be expected to solve problems that give even humans a headache in real life.

1 hour ago, Ssnake said:

I think we're better off with meaningful screenshots taken at key moments. That being said,

  • I noticed that the vehicles were occasionally driving on the spoils left and right of a breach lane. There are mines there, so don't. I don't know whether you drove there manually or whether the vehicles did this as part of autonomous maneuvering.
  • Mine plows, mine rollers, and concertina wire don't mix; they mess things up. So, generally, avoid driving into wire obstacles with them. What you need are dismounted engineers/sappers, or vehicles with a dozer blade.
  • We don't assume responsibility for deviant AI behavior (yet) if you mix too many obstacle types. I saw scatter mines on top of surface-laid AT mines (that will work), but if you created, say, a concertina wire obstacle right in the middle of an anti-tank minefield, the only universal breaching asset that you could still hope to use without manual control is dismounted engineers, and they are slow and vulnerable while they do it. There are only so many complications you can throw at the AI before "edge cases" start to dominate the behavior. Edge cases are where a minor perturbation of the starting conditions can result either in correct AI behavior or in something unexpected; the former creates the impression of competent behavior, the latter not so much. We can teach the AI not to drive into water, so it'll follow a coastline. We can teach the AI to avoid minefields, so it might even navigate around a minefield on a beach without driving into the water. Plant a forest on the beach with large boulders - well, at some point the complexity is too much. And I don't think the AI should be expected to solve problems that give even humans a headache in real life.

Don't underestimate the power of AI. Even games are now starting to make use of GPU capacity to perform complex machine learning that in some cases already surpasses the human ability to make decisions. For good or bad, that's the near future.

 


  • Members

Welllllll... the problem with deep neural networks (DNNs) is that you need to present them with training data to work with, and with clearly defined tasks against which to optimize a solution. And then you receive a result where you can see that they are doing "things" but don't know "why". Presenting DNNs with suitable training data so they learn the right thing is a challenge in its own right. There will be cases where DNNs can be very useful, but they are not a magical panacea.

 

A classic example of DNNs being used and trained is image recognition. In one case, a face recognition system was supposed to detect the faces of "test terrorists" among all the "test visitors" in an airport setting. So they took a lot of photos of the "test visitors" and photos of the "test terrorists" and trained a neural network to recognize the "terrorists" with 99.99% accuracy. The live test then detected none of the "terrorists" in the morning and produced a lot of false alarms in the afternoon.

Why?

The first set of photos (of the visitors) was all taken during the morning, the second set after lunch. The neural network was effectively trained to look for lighting conditions matching the afternoon and completely ignored the faces. A very efficient solution to a heavily biased problem, but the bias wasn't recognized until the result was undeniably rubbish.
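
The same shortcut-learning effect is easy to reproduce on toy data. Below is a small sketch (my own illustration, not related to that project): a logistic regression is trained on data where a spurious "brightness" feature happens to correlate perfectly with the label, so the model latches onto the shortcut and collapses when that correlation flips at test time.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

def make_data(brightness_for_class1):
    """Toy data: feature 0 is the 'real' signal (weakly informative),
    feature 1 is scene brightness (spurious, set per class)."""
    y = rng.integers(0, 2, n)
    signal = y + rng.normal(0, 2.0, n)                  # weak true signal
    brightness = np.where(y == 1, brightness_for_class1, 1 - brightness_for_class1)
    brightness = brightness + rng.normal(0, 0.05, n)    # nearly deterministic shortcut
    return np.column_stack([signal, brightness]), y

def train_logreg(X, y, lr=0.1, epochs=500):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

# Training set: class 1 is always "bright" (the afternoon photos).
Xtr, ytr = make_data(brightness_for_class1=1.0)
w, b = train_logreg(Xtr, ytr)

# Test set: the lighting correlation is reversed.
Xte, yte = make_data(brightness_for_class1=0.0)
print("weights (signal, brightness):", np.round(w, 2))
print("train accuracy:", accuracy(w, b, Xtr, ytr))
print("test accuracy :", accuracy(w, b, Xte, yte))   # collapses once the shortcut flips
```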

 

 

This cannot be directly translated to the case of training a DNN to control the behavior of computer-controlled units, of course. But it illustrates that you cannot be careful enough in presenting unbiased training samples and feedback for the desired outcome. Also, it doesn't address at all what I was describing before: confronting an AI with multiple simultaneous challenges with conflicting goals, like "use the plow against mines", "don't use the plow against concertina obstacles", "don't make tight turns with the plow deployed", and then throwing it a curveball like "here's a forest with mines and some concertina wire for good measure" and expecting it to magically work somehow. Even with deep neural networks in use there will still be edge cases (there always are), and stacking multiple challenges into a single situation still multiplies the number of possible edge-case behaviors.


We are trying to teach machines to think in a few years - behavior that evolved in human beings over hundreds of thousands of years in the species (millions or billions of years for vertebrate animals) of trial and error, culling unsuccessful behaviors from successful ones. As you can imagine, this won't be done overnight. In fact, I think that to some degree, in order to accomplish this, the organic machine - that is to say, human beings - has to combine with the machine (and in a sense that is what is happening just by virtue of teaching machines behavior) in order to bring machines even remotely into the bounds of what we would consider thinking behavior.

 

At the same time, I really do marvel at what has happened in just a few years - not so much the machines themselves, but how far man has come in inventing and manipulating reality. For example, in the United States (and the rest of the world, but the United States is leading the technology), banks and financial institutions have an obligation to report suspicious activity to government agencies. Larger institutions with millions of customers, through whose networks hundreds of millions of dollars' worth of transactions are processed every day, cannot possibly detect money laundering and other illegal forms of moving money with human investigators alone; it would never get done. Machine learning has largely stepped up here: able to spot patterns of behavior which may indicate money laundering, illegal sales of contraband, or terrorist financing, the machines flag the behavior, and every time a known behavior occurs they add that much more to their algorithms. It isn't perfect, of course, but it doesn't need to be - every false flag is also a learning experience for the machines, and they get that much more accurate because of it. In fact, we see the same thing unfolding in actuarial science, so much so that this is how the insurance industry runs a successful business model: actually predicting major life events based on the statistical measurement of all similar people in a similar set of circumstances (that is, the likelihood that someone's life will play out in a rather predictable manner based on their geographical location, age, race, sex, education, socio-economic background, even the color of car they own). If someone buys insurance products such as annuities or life insurance, they are betting against the insurance company that they know more about what is going to happen to them than the insurance company does; but the insurance companies, through their sophisticated machine learning, have all the data. Of course the industry can make a bad bet here and there, and the models aren't perfect, but they are good enough over large scales to be quite profitable; the individual case may beat their predictions, but on the macroscopic level you see something different happen because of the way they are able to predict large trends.
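
Purely as an illustration of that flag-and-learn loop (nothing to do with any real compliance system - the features, the hidden rule, and the thresholds are all invented), a monitoring model can be nudged toward the reviewers' judgment by simple online updates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented transaction features: [amount z-score, transactions in last 24 h, cross-border flag]
TRUE_W, TRUE_B = np.array([1.5, 0.6, 1.0]), -4.0    # hidden "ground truth" rule to be learned

def sample_transaction():
    return np.array([rng.normal(0, 1), float(rng.poisson(3)), float(rng.integers(0, 2))])

def analyst_verdict(x):
    """Stand-in for the human review step: a noisy label drawn from the hidden rule."""
    p_true = 1 / (1 + np.exp(-(x @ TRUE_W + TRUE_B)))
    return float(rng.random() < p_true)

w, b, lr = np.zeros(3), 0.0, 0.05
flags = 0
for _ in range(20000):
    x = sample_transaction()
    p = 1 / (1 + np.exp(-(x @ w + b)))        # model's suspicion score
    flags += p > 0.5                           # would be routed to an analyst
    y = analyst_verdict(x)                     # feedback from review of flags plus random audits
    w -= lr * (p - y) * x                      # online logistic-regression update
    b -= lr * (p - y)

print("transactions flagged:", flags)
print("learned weights:", np.round(w, 2), "vs. true", TRUE_W)
print("learned bias   :", round(b, 2), "vs. true", TRUE_B)
```

With enough labeled feedback the learned weights drift toward the hidden rule; the point is only to show the feedback loop, not a realistic detection model.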

 

Now of course there is a difference with video games, which are nowhere near something like that, for various reasons: a single video game doesn't have as much data to work with as a large-scale industry such as finance or insurance, and a single developer doesn't have the kind of resources to devote to machine learning, nor the same access to feedback. However, I will say this - I'm a bit late to the game, but I installed Skyrim just a few days ago, and from my observations with some of the better community mods out there, it's rather outstanding how lifelike some of the behaviors of the actors in the game are. Even about 20 hours into the game, I've not seen anything like it - and this game is from ten years ago or so.


I am certain that there are ways to get DNNs to work really well and predictably - but at a cost that our current societies are not willing to bear, because of old beliefs and stigmas. And how could we, if we have not even managed to get "human workers" to work that way?  Because aren't we, as humans, the ultimate DNNs?  What works on us would likely work on DNNs, although there may be a long way to go to get them that far...  "bits" vs. living organisms.

 

That application case of training would be awesome, though.  And something similar that mimics DNNs would be useful (within their limitations, just like the current AI in use in Steel Beasts).

Such a thing could be done with dedicated changes to the engine and some cheating.  For example, the height maps are the same in most scenarios, so a layer of patterns could be made to control the paths that units will follow.  And a bit like we have drain regions, a "drain" could be used to alter the direction of the flow of units.  It's a kind of cheesy cheat, and it would mean some extra work when creating maps, but that's how I'd do it.  Also, there are some really obvious marks on the terrain that attract players; the same could be made to attract the AI.  Though then we run into the same issue Ssnake just mentioned: creative laziness can only go so far.  But from mistake to mistake, the product eventually gets better.  There is hardly any progress without failures.
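
A toy version of that "attractor layer" idea might look like the sketch below (entirely my own invention, nothing from the Steel Beasts engine): paint a cost grid over the map, make the marked lane cheap and the obstacle cells very expensive, and let an ordinary shortest-path search funnel the unit accordingly.

```python
import heapq

# 0 = open ground, 1 = "attractor" lane (cheap), 9 = minefield/obstacle (nearly forbidden)
# Hypothetical 6x8 cost layer painted by the scenario designer.
LAYER = [
    [0, 0, 0, 9, 9, 0, 0, 0],
    [0, 0, 0, 9, 9, 0, 0, 0],
    [0, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 9, 9, 0, 0, 0],
    [0, 0, 0, 9, 9, 0, 0, 0],
    [0, 0, 0, 9, 9, 0, 0, 0],
]
COST = {0: 5.0, 1: 1.0, 9: 500.0}

def cheapest_path(start, goal):
    """Dijkstra over the cost layer; returns the list of grid cells to traverse."""
    rows, cols = len(LAYER), len(LAYER[0])
    best, came = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > best[cell]:
            continue
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + COST[LAYER[nr][nc]]
                if nd < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nd
                    came[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, cell = [goal], goal
    while cell != start:
        cell = came[cell]
        path.append(cell)
    return path[::-1]

# A unit on the left edge heading to the right edge gets funneled through the marked lane.
print(cheapest_path((0, 0), (0, 7)))
```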

 

    


4 hours ago, Lumituisku said:

aren't we, as humans, the ultimate DNNs?

 

    

Well, it certainly looks as though all organisms are contained in this neural network. The organism takes in information from its environment, and at the same time the environment takes in information from the individual organism; this is why the relationship between the individual and its environment is a transaction rather than a mere interaction - there is an exchange of information going on all the time. A tall individual, for example, would not know that he is tall without some information from the environment to confirm this - for example, being in the presence of shorter people, who are in turn confirmed to be short in the presence of someone tall. If you apply this to everything we might conceive, it is a connected system; we are each of us like an individual neuron in a larger brain, and so we evolve as information spreads - hence the phrase 'going viral' and this sort of thing. Disruptive and innovative technologies that have developed rapidly over the last century have put us in the peculiar position of augmenting this neural network in ways the species really had no precedent for adapting to so quickly. So things like the internet, which connects everyone globally to this large system on a scale that prior stages of human and animal evolution had never experienced, came with advantages, but they also brought problems, which we are forced to work out on the fly without much history to go on as a guide.


  • Members
3 hours ago, Lumituisku said:

aren't we, as humans, the ultimate DNNs?

Technically yes, but in every practical sense: NO.

A typical DNN has maybe 300...500 neurons. While I was a student in the mid-1990s, my friends worked with simple neural networks in the 10...15 neuron range. Sure, we have inflated the number by a factor of 20 in those 30 years, and a layered approach has delivered very good results in specific application cases; I won't deny that. But 300 neurons represent the complexity of a primitive parasitic worm - in fact, the only lifeform whose neural network has been completely mapped is Caenorhabditis elegans. Even a f'in jellyfish already has 5,600 neurons.

The human central nervous system has about 100 billion cells.

Just because we have an idea of how information is processed by neurons, with excitation cascades etc., doesn't mean that we have the slightest idea how the human brain works in its full complexity. Not at all. People always overlook the complexity that results from scaling.

Not every neuron in the human brain is connected to every other one, or else the number of connections would be 100 billion factorial - a number ridiculously many orders of magnitude larger than the number of protons in the universe: https://www.wolframalpha.com/input/?i=factorial+100000000000

So, I suppose you can delete a few of its roughly 25 billion trailing zeros when trying to assess the complexity of the human brain.
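
For the curious, a quick back-of-the-envelope check of those figures, using Legendre's formula for the trailing zeros and Stirling's approximation for the overall size of the factorial:

```python
import math

N = 100_000_000_000   # 100 billion neurons

# Trailing zeros of N! = number of factors of 5 in N! (Legendre's formula).
zeros, p = 0, 5
while p <= N:
    zeros += N // p
    p *= 5
print(f"trailing zeros of {N:,}!: {zeros:,}")            # ~25 billion

# Number of decimal digits of N!, via Stirling: ln N! ~ N ln N - N.
digits = (N * math.log(N) - N) / math.log(10)
print(f"decimal digits of {N:,}!: about {digits:.2e}")   # ~1e12 digits, vs. 81 digits for ~1e80 protons
```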

 

What I'm trying to say here is that studying and using current DNNs tells you about as much about the reality of the human brain as studying a single water molecule tells you about the influence of the ocean on human culture, from seafaring to shipbuilding, Hemingway novels, and songs like this one:

 


  • Members

Look, I don't want to be negative. I just want to point out that there's a lot of bullshitting going around when people start talking about neural networks. There are a few amazing things that they can do. At the same time they are opaque. As a developer, you have no idea why an evolving DNN comes to the results that it does, and if you want a different result it's not always entirely clear how to retrain it to more closely match what you want.

They are a tool in the toolbox which can be very effective if applied to the right task. Maybe we're going to use them in the future for specific tasks, too. But you can't simply throw them at every problem, they are not going to solve everything, and in any case - and this is important to remember - neural networks are trained by humans. Which means that humans still have a chance to fuck things up with them, be it out of ignorance, negligence, or sometimes even malice.

 

Do people still remember when Microsoft released a chat bot and 4chan users decided to sabotage it by feeding it every piece of politically incorrect trash talk they could think of, turning the bot into a horrid thing in less than a week? There's a lesson in there, somewhere.


4 hours ago, Ssnake said:

Do people still remember when Microsoft released a chat bot and 4chan users decided to sabotage it by feeding it every piece of politically incorrect trash talk they could think of, turning the bot into a horrid thing in less than a week? There's a lesson in there, somewhere.

 

No idea what specifically you mean about 4chan and Microsoft, but it does illustrate a point I make. I am not saying the existence of a neural network is proof of 'success', or proof of some technological singularity we are on the threshold of reaching, or the point where some kind of engineering perfection is achieved - far from it. There is a tendency to go the other direction and discount certain evidence because it's not 'perfect', nor even remotely so, and therefore throw the whole idea out. That's not even what life is: we don't discount the existence of life because of the existence of, say, defects in life, cancer, and so on - life implies cancer, life implies entropy and decay. Those things don't disprove the existence of life, they imply its existence. The same cellular division and reproduction which supports and spreads life is the same process which causes cancer; the same processes which bring life into existence are the same forces which test organisms in their very survival. That people can turn a forum into a gutter of trash talk doesn't in itself disprove what I mean by neural networks - in my opinion, for that very reason: certain elements of the culture, the network of the culture, do not want to get along with one another, and they present problems to sort out. And that is the whole Red Queen hypothesis in a nutshell - an evolutionary theory that problems are never solved. It is as if we are all on a treadmill which never stops; we never reach the finish line in a race, because it always remains out of reach. You solve one set of problems, and it isn't long before more are in front of you and the finish line remains ever out of reach. This would explain why the species no longer lives in caves or trees, or in the twelfth century, or in the 1950s when the first solid-state devices were being introduced. It all keeps going because there always seems to be a new problem which gets in the way, which we perceive needs fixing. Despite our efforts the carrot remains out of reach, but in the process we evolve by trying anyway.

 

That machines are trained by humans is my very point - they are not altogether distinct from humans, they are an extension of them. Much like the way a wave comes out of the sea, both distinct from and yet part of the sea, man's technology is an expression of himself, as you know.  I don't predict the future, but in time the distinction may become more difficult to discern, assuming the species continues to co-evolve with its own technology. This isn't to say, of course, that we're at the stage of perfection, or necessarily anywhere near it; rather, it's a process that is always ongoing and which was already happening from the beginning - the exchange of DNA in replication is at its core an exchange of information which passes into successive generations and keeps going. What I am explaining is that organic machine and non-organic machine are becoming fused together; they already are. This doesn't mean that tomorrow the species will be human cyborgs - I cannot predict when technology and humans will somehow merge to make the distinction impossible to discern - but I see evidence of it underway. Just like powered flight didn't come prepackaged but advanced through trial and error, there is evidence now in the way programming is no longer as much of a separate career field as it once was; even young children are taught and exposed to some form of coding, which was certainly going on a lot less in my generation than what is becoming de rigueur in post-industrial societies. In other words, it's all evolving again, and there is no way that will not have an impact on where we are going.


On 4/28/2021 at 3:04 AM, Ssnake said:

I think we're better off with meaningful screenshots taken at key moments. That being said,

  • I noticed that the vehicles were occasionally driving on the spoils left and right of a breach lane. There are mines there, so don't. I don't know whether you drove there manually or whether the vehicles did this as part of autonomous maneuvering.
  • Mine plows, mine rollers, and concertina wire don't mix; they mess things up. So, generally, avoid driving into wire obstacles with them. What you need are dismounted engineers/sappers, or vehicles with a dozer blade.
  • We don't assume responsibility for deviant AI behavior (yet) if you mix too many obstacle types. I saw scatter mines on top of surface-laid AT mines (that will work), but if you created, say, a concertina wire obstacle right in the middle of an anti-tank minefield, the only universal breaching asset that you could still hope to use without manual control is dismounted engineers, and they are slow and vulnerable while they do it. There are only so many complications you can throw at the AI before "edge cases" start to dominate the behavior. Edge cases are where a minor perturbation of the starting conditions can result either in correct AI behavior or in something unexpected; the former creates the impression of competent behavior, the latter not so much. We can teach the AI not to drive into water, so it'll follow a coastline. We can teach the AI to avoid minefields, so it might even navigate around a minefield on a beach without driving into the water. Plant a forest on the beach with large boulders - well, at some point the complexity is too much. And I don't think the AI should be expected to solve problems that give even humans a headache in real life.

#1, I mixed in manual driving with the breaching tanks to eliminate as many of those pesky mines in the spoils as possible.  You never know when the AI will decide to make crazy turns inside the lane.  The company I pushed through was all AI-driven.

#2, I pulled the plow/roller tanks away before reaching the wire.  Few things are worse than an immobilized vehicle in a breach lane.

#3, I mixed the minefields up because I've noticed that AT minefields are too well organized, to the point where some smaller vehicles can just drive right between the mines.  So I made it more dangerous to try and discourage that.  I didn't take overloading the game AI into consideration.  Perhaps a future "fix" is to make the minefield setup more random?


31 minutes ago, RedWardancer said:

Perhaps a future "fix" is to make the minefield setup more random?

 

Except real engineers tend to lay them in clusters or lines / rows so they know where they are - they potentially need to go out and collect them again. :)

 

That doesn't apply to FASCAM though, which tends to be random.
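
To illustrate the difference being discussed (a toy sketch only, not how the mission editor generates minefields): a conventionally laid field in recorded rows with fixed spacing and a little jitter, versus a FASCAM-style random scatter over the same footprint.

```python
import random

random.seed(42)

def row_minefield(width, depth, row_spacing, mine_spacing, jitter=1.0):
    """Conventionally laid field: mines in recorded rows with a little
    placement jitter. All distances in meters (arbitrary choices)."""
    mines = []
    y = row_spacing / 2
    while y < depth:
        x = mine_spacing / 2
        while x < width:
            mines.append((x + random.uniform(-jitter, jitter),
                          y + random.uniform(-jitter, jitter)))
            x += mine_spacing
        y += row_spacing
    return mines

def fascam_minefield(width, depth, count):
    """Scatterable (FASCAM-style) field: uniformly random over the footprint."""
    return [(random.uniform(0, width), random.uniform(0, depth)) for _ in range(count)]

rows = row_minefield(width=300, depth=100, row_spacing=25, mine_spacing=8)
scatter = fascam_minefield(width=300, depth=100, count=len(rows))
print(f"row-pattern field: {len(rows)} mines, first three: {rows[:3]}")
print(f"scattered field  : {len(scatter)} mines, first three: {scatter[:3]}")
```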


  • Members

Openly laid conventional anti-tank minefields would be the exception to begin with, as they are indeed relatively easy to defeat by simply not driving onto them (provided that you have good enough visibility, especially for the driver; it's one thing from an external observer's position but quite another from the driver's or commander's position). Simple solution: make the minefield "buried", and only very reckless and daring humans will drive through (and even if they do, chances are they will blow up eventually). AI vehicles avoid (known) minefields anyway, unless set on a breach route.

 

Stacking multiple minefields on top of each other doesn't complicate the challenge for the AI. Mixing different obstacle types in close proximity is a different thing. But it would be in real life, too. So I'm not really worried or ashamed if AI vehicles can't clear these obstacles autonomously. Such cases should require human intervention. The purpose of obstacles is to make it hard to breach them, after all, so you feel tempted to bypass them (which probably funnels you into a prepared kill zone).

