SETI, Protein Folding, LT AI Constants??

#1
From Devlog 4th Feb 2014
But how much, exactly? :think: I believe that this is one of the most fundamental constants in the game. But is there any way to derive it? Not sure, short of gathering data from loads and loads of hours of simulation.
In today's devlog, Josh mentions that he has pulled the AI away from the graphical side of the game and that brute-forcing the constants is one way to generate appropriate values for the AI.

Every few days there are posts saying 'I missed the Kickstarter, what can I do to help?'

How much work would it be to make a stand-alone AI that can run in the background and send reports back to LT-Homebase (probably inside an extinct volcano)? This way the AI could be run through hundreds of hours of simulation.

I'm sure the folks that received the prototype would be willing to help out if the software is to remain a little more in-house.

However, based on what I have read so far, I have a feeling that Josh would much rather find an elegant solution than just throw CPU cycles at it, but I would be willing to run an LT-AI program without hesitation.
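
For what it's worth, here is a rough sketch of what the 'phone home' part could look like, purely hypothetical: the endpoint URL, the payload fields, and the run_simulation stub are all made up by me and have nothing to do with the actual LT code.

Code: Select all
import json
import random
import urllib.request

REPORT_URL = "https://example.com/lt-homebase/report"  # placeholder, not a real endpoint

def run_simulation(seed, hours):
    """Stand-in for the real background simulation: returns summary stats."""
    rng = random.Random(seed)
    return {
        "seed": seed,
        "sim_hours": hours,
        "avg_npc_credits": rng.uniform(1e4, 1e6),
        "trade_volume": rng.uniform(1e5, 1e7),
        "combat_events": rng.randint(0, 5000),
    }

def send_report(stats):
    """POST the summary stats as JSON to the collection server."""
    data = json.dumps(stats).encode("utf-8")
    req = urllib.request.Request(
        REPORT_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    stats = run_simulation(seed=42, hours=100)
    print("would report:", stats)
    # send_report(stats)  # only once a real collection endpoint exists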

Re: SETI, Protein Folding, LT AI Constants??

#2
I didn't really get the tie-in with the "people still want to back this... etc."

BUT I like this idea; it's cute, and collective computing does sound rather nice. From what I can remember, though, Josh's simulation (the history/background simulation that runs while loading the game) has already gotten pretty efficient, and as we know, there is not much that he loves more than simple efficiency :P

Also, how would you imagine the collective simulation computing working across the thousands of different universe seeds of each LT Universe?

Still, big fan of you putting SETI in the headline :D

~Komu :squirrel:

Re: SETI, Protein Folding, LT AI Constants??

#3
Komurin wrote:I didn't really get the tie-in with the "people still want to back this... etc."

BUT I like this idea; it's cute, and collective computing does sound rather nice. From what I can remember, though, Josh's simulation (the history/background simulation that runs while loading the game) has already gotten pretty efficient, and as we know, there is not much that he loves more than simple efficiency :P

Also, how would you imagine the collective simulation computing working across the thousands of different universe seeds of each LT Universe?

Still, big fan of you putting SETI in the headline :D

~Komu :squirrel:
As I understand it, it's only there to provide the constants needed for AI balancing, and those are the same in all seeds.

Re: SETI, Protein Folding, LT AI Constants??

#5
Relative value ... wonder where I heard of that before.

Anyways, this is a problem that markets can solve, after all:

"The purpose of a financial system is to solve a collective optimization problem whose solution we cannot guess a priori." - Seeking alpha

So here's the proposal:
1. Base EVERYTHING off of credits - discovery, manufacturing, research, combat, reputation
2. Set the goal to be the maximization of credits per unit time
3. Let the AI do its work
4. To avoid local minima, do perturbations of the system in a Poisson-like distribution.

If the AI is robust enough, you'll eventually see activity: competition (because many entities find the same solution), which leads to diversification (because of supply and demand); AI-driven exploits (because those are just more market opportunities); reputation and status (because those open up more opportunities for more credits/time); and so on.

A relatively top-down approach to doing this.
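
A toy sketch of steps 2-4, if it helps make this concrete. I'm reading "Poisson-like" as perturbations arriving at exponentially distributed intervals (a Poisson process), and the single 'strategy' number plus the credit_rate landscape are entirely made up:

Code: Select all
import math
import random

def credit_rate(strategy):
    """Hypothetical credits-per-hour landscape with several local optima."""
    return math.sin(3 * strategy) + 0.5 * strategy

def optimize(steps=10_000, perturb_rate=0.01, step_size=0.05):
    rng = random.Random(0)
    strategy = rng.uniform(0.0, 3.0)
    best = (credit_rate(strategy), strategy)
    next_perturb = rng.expovariate(perturb_rate)  # exponential gaps = Poisson arrivals

    for t in range(steps):
        # Ordinary greedy improvement: the AI "doing its work".
        candidate = min(max(strategy + rng.uniform(-step_size, step_size), 0.0), 3.0)
        if credit_rate(candidate) > credit_rate(strategy):
            strategy = candidate

        # Occasional large perturbation to kick the AI out of local optima.
        if t >= next_perturb:
            strategy = rng.uniform(0.0, 3.0)
            next_perturb = t + rng.expovariate(perturb_rate)

        if credit_rate(strategy) > best[0]:
            best = (credit_rate(strategy), strategy)
    return best

if __name__ == "__main__":
    rate, strategy = optimize()
    print(f"best credits/hour {rate:.3f} at strategy {strategy:.3f}")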
Last edited by jimhsu on Tue Feb 04, 2014 3:11 pm, edited 1 time in total.

Re: SETI, Protein Folding, LT AI Constants??

#6
jimhsu wrote:Relative value ... wonder where I heard of that before.

Anyways, this is a problem that markets can solve, after all:

"The purpose of a financial system is to solve a collective optimization problem whose solution we cannot guess a priori." - Seeking alpha

So here's the proposal:
1. Base EVERYTHING off of credits - discovery, manufacturing, research, combat, reputation
2. Set the goal to be the maximization of credits per unit time
3. Let the AI do its work
4. To avoid local minima, do perturbations of the system in a Poisson-like distribution.

A relatively top-down approach to doing this.
The problem is HOW to value everything, how to value abstract things in credits.
Exploration, reputation: how do you pin a value on these things?

Re: SETI, Protein Folding, LT AI Constants??

#7
Discounted cash flow?

All activities, after all, have a time value of money associated with them -- exploration opens up new ores/derelicts/etc., combat rewards you with loot and reputation (which translates into salary in the future), and mining is directly related to the market price of ores.

Using some sort of global optimization algorithm or GA, take an activity and parse all the possible consequences up to some depth (depending on how much computational power you have). Determine the value of credits obtained. Discount that value by the estimated time it takes for you to do said activity. There, you have the present value.

Insanely hard to do? Yes. But LT has never backed down from impossible challenges so far.

If it helps, this does not have to be done with every single ship (even a supercomputer probably wouldn't be able to handle that). But to get initial values ... this is a start.
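
A minimal sketch of the discounting step, with invented activities, payoffs, and discount rate (none of these numbers come from the game, and the hard part, parsing consequences to depth, is not shown):

Code: Select all
# Value activities by discounted cash flow: estimate the credits an
# activity eventually yields, then discount by how long it takes.
# Activity names and all numbers are invented for illustration.

ACTIVITIES = {
    # name: (estimated future credits, estimated hours to realize them)
    "mine_local_asteroids": (5_000, 2.0),
    "explore_new_system":   (40_000, 20.0),
    "raid_trade_convoy":    (15_000, 4.0),
}

DISCOUNT_RATE_PER_HOUR = 0.05  # hypothetical "impatience" of the NPC

def present_value(future_credits, hours, rate=DISCOUNT_RATE_PER_HOUR):
    """Standard discounting: value now = value later / (1 + r)^t."""
    return future_credits / ((1.0 + rate) ** hours)

if __name__ == "__main__":
    ranked = sorted(
        ((present_value(c, h), name) for name, (c, h) in ACTIVITIES.items()),
        reverse=True,
    )
    for pv, name in ranked:
        print(f"{name:24s} present value ~ {pv:,.0f} credits")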

Re: SETI, Protein Folding, LT AI Constants??

#8
First of all: Hello everyone! ;)

I've been reading on these forums for quite a while and I guess I finally decided to participate^^

Anyways, on to the point: if I remember the pertinent dev logs correctly, I think LT's AI already incorporates algorithms similar to what jimhsu proposed. But as Cornflakes pointed out, it's difficult to have the AI estimate things it has never done before and has no information about (like exploring a new system).

So how about this:

Why not tie behavioral options in more closely with the AI's personality? I'm assuming we already have "aggression" as a personality trait. What is different about an aggressive approach (e.g. robbing people) compared to a small-trade approach (e.g. mining and/or trading)? While the station you are selling to might not let you make good profits, or any at all, on a specific run, you will be relatively safer (from law enforcement or other NPCs) with regard to survival, whereas if you rob people, they might just blast you to kingdom come with crazy-strong missiles you didn't know they had, or call their mighty friends or the police on you.
Exploration is similar to this: you can't know for sure whether there will be a clan of war-mongering, throat-cutting AI in the next system or simply nothing of value at all.
Since self-preservation should be one of the AI's main goals, why not implement a personality-related threshold for risky behavior? Most AIs would not be great risk-seekers (just as with us humans :) ) and would therefore stay in the same system and do the same things (like mining and trading); if they are successful, why do something else? Their high-risk personality traits (like "aggression" and "exploratory drive") would not be high enough to exceed this "threshold", and so those options would not even be considered.
Now, if they are not making any money this way, and/or someone else just robbed them because there isn't enough ore for everyone, or for whatever other reason, this "traumatic experience" could cause the AI to alter its personality traits a bit, increasing "aggression" or "exploratory drive", which in turn could push one of them over the threshold and make it an option. Since both exploration and aggressive behavior require long-term planning and some experience to have a decent chance of success, maybe they shouldn't be constantly compared to other activities in terms of monetary influx, but rather have a certain amount of time "reserved" for them; i.e. the NPC will periodically "roll" a die to decide whether to pursue risk-seeking behavior like aggression or exploration.

If both traits are below the threshold, the result will always be zero and the NPC will never do this, but if the traits have larger values, there will be a chance to usher in a period of aggressiveness or exploration for the NPC, after which it will evaluate its success and increase or decrease the pertinent personality-trait value accordingly. After that it can continue with non-risk behavior until it "rolls the die" again or an opportunity presents itself (since now the NPC has experience with risky behavior and can estimate returns).
Some more things that should be incorporated alongside this: the length of this time interval would be inversely proportional to the magnitude of the risky personality-trait values. That way a major warlord will tend to wage war a lot more if he is successful with that strategy. In the non-risk phases he will hire more ships, trade, or build more weapons for himself OR... CONTINUE RAIDING :thumbup: since, now that he has some experience in that matter, why not keep doing it? That means AI should be able to conduct risky operations outside the "risky intervals", but only if they have experience in that field. Assigning monetary value to such an operation becomes easier then, since the AI now has experience to base estimations on. Lastly, having lots of success / trading partners in the area could raise the "risk threshold" to discourage losing them.
Bear in mind that "aggression" should not dictate an NPC's willingness to defend itself! It might influence the extent to which the NPC will fight back rather than flee, but if it is much slower than its attacker, then it should fight back even when it has very low aggression (self-preservation has to be the main goal).
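
In case a condensed version helps, here is roughly how I imagine the threshold / periodic-roll mechanic could be wired up. The trait names, thresholds, and adjustment sizes are all just my guesses for illustration:

Code: Select all
import random

RISK_THRESHOLD = 0.5
BASE_ROLL_INTERVAL_HOURS = 100.0

class NPC:
    def __init__(self, aggression, exploratory_drive, rng):
        self.traits = {"aggression": aggression, "exploration": exploratory_drive}
        self.rng = rng

    def roll_interval(self):
        # Interval is inversely proportional to the strongest risky trait,
        # so a successful warlord "rolls" (and raids) far more often.
        strongest = max(self.traits.values())
        return BASE_ROLL_INTERVAL_HOURS / max(strongest, 0.05)

    def periodic_roll(self):
        """Return the risky activity to attempt this period, or None."""
        eligible = [t for t, v in self.traits.items() if v > RISK_THRESHOLD]
        if not eligible:
            return None  # below threshold: risky behavior is never even considered
        return self.rng.choice(eligible)

    def traumatic_experience(self):
        # e.g. got robbed: risky traits creep upward a little.
        for t in self.traits:
            self.traits[t] = min(1.0, self.traits[t] + 0.1)

    def evaluate_outcome(self, activity, success):
        # After a risky period, nudge the pertinent trait up or down.
        delta = 0.1 if success else -0.1
        self.traits[activity] = min(1.0, max(0.0, self.traits[activity] + delta))

if __name__ == "__main__":
    npc = NPC(aggression=0.3, exploratory_drive=0.2, rng=random.Random(1))
    print("initial roll:", npc.periodic_roll())   # None: plays it safe
    for _ in range(4):
        npc.traumatic_experience()                # kept getting robbed
    activity = npc.periodic_roll()
    print("after trauma:", activity, "- rolls every", round(npc.roll_interval()), "hours")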


TL;DR

Anyways, this is just an idea to avoid setting arbitrary values for exploration and raiding by periodically having a chance to engage in them for a while based on personality traits.
Most NPCs wouldn't do this; a "traumatic experience" can cause them to "consider" the option (when they roll a die periodically).
If they have done these things before (i.e. have experience), they can consider them as part of their normal routine as well, since it is now possible to estimate the monetary gain based on previous exploits.

Re: SETI, Protein Folding, LT AI Constants??

#9
If everything in the universe is procedural, then simply let the AI learn what activity is valuable and what isn't (as time passes WHILE we play the game). Most values etc. don't need to be predetermined before the game starts. The universe can learn: as needs are discerned and activities are acted out to fulfill them, value will be attached to those activities naturally.

Let each individual AI decide for itself what is a valuable use of time and resources FOR HIM. If another system hasn't been found in the history of the populace yet, then they won't know whether it's valuable or not. Only once one is found, and the implications of finding it are slowly realized over the course of time, will they put a value on it and possibly decide it's profitable to find more. And it will have a different value for each individual.

Things MAY stagnate, but there are usually natural factors that push a population to expand or change. If one guy learns that searching for ore is profitable, then at some point, when all the ore is exhausted or owned by others, he will search outside the system if he is adventurous or desperate enough (or turn to piracy... hehe). This could take into account the personality types of the individual -- as Bloodthorn outlined -- with the exception of the periodic die roll. I think the AI should always act out of motivation and need, not a periodic die roll added for the sake of change (just my opinion).
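
Just to illustrate what I mean by letting value emerge from experience rather than being predetermined, here is a tiny sketch; the activity names and rewards are made up, and each NPC would keep its own memory like this:

Code: Select all
from collections import defaultdict

class ValueMemory:
    """Per-NPC running estimate of credits/hour for each activity it has tried."""

    def __init__(self):
        self.estimates = defaultdict(float)  # activity -> estimated credits/hour
        self.counts = defaultdict(int)

    def record(self, activity, credits_earned, hours_spent):
        rate = credits_earned / hours_spent
        self.counts[activity] += 1
        n = self.counts[activity]
        # Incremental mean: the estimate drifts toward observed reality.
        self.estimates[activity] += (rate - self.estimates[activity]) / n

    def best_known_activity(self):
        if not self.estimates:
            return None  # nothing tried yet, so nothing has a value
        return max(self.estimates, key=self.estimates.get)

if __name__ == "__main__":
    memory = ValueMemory()
    memory.record("mine_ore", credits_earned=2_000, hours_spent=1.0)
    memory.record("mine_ore", credits_earned=1_500, hours_spent=1.0)
    # Exploration only acquires a value once it has actually been done:
    memory.record("explore_system", credits_earned=30_000, hours_spent=10.0)
    print(dict(memory.estimates))
    print("prefers:", memory.best_known_activity())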
Last edited by Chromasphere on Tue Feb 04, 2014 5:17 pm, edited 2 times in total.

Re: SETI, Protein Folding, LT AI Constants??

#10
Jimhsu has exactly the right idea on the theory that should probably drive NPC behavior. I brought it down to a more tactical level in my suggestion thread. The marketplaces for things like information will have to be carefully constructed, but if done correctly, they can and will function, and drive intelligent NPC behavior.
Spacecredentials: looks at stars sometimes, cheated at X-Wing vs TIE Fighter, killed a titan once.

Re: SETI, Protein Folding, LT AI Constants??

#11
Chromasphere wrote:If everything in the universe is procedural, then simply let the AI learn what activity is valuable and what isn't (as time passes WHILE we play the game). Most values etc. don't need to be predetermined before the game starts. The universe can learn: as needs are discerned and activities are acted out to fulfill them, value will be attached to those activities naturally.

Let each individual AI decide for itself what is a valuable use of time and resources FOR HIM. If another system hasn't been found in the history of the populace yet, then they won't know whether it's valuable or not. Only once one is found, and the implications of finding it are slowly realized over the course of time, will they put a value on it and possibly decide it's profitable to find more. And it will have a different value for each individual.

Things MAY stagnate, but there are usually natural factors that push a population to expand or change. If one guy learns that searching for ore is profitable, then at some point, when all the ore is exhausted or owned by others, he will search outside the system if he is adventurous or desperate enough (or turn to piracy... hehe). This could take into account the personality types of the individual -- as Bloodthorn outlined -- with the exception of the periodic die roll. I think the AI should always act out of motivation and need, not a periodic die roll added for the sake of change (just my opinion).
fatmop has outlined a nice scheme in his suggestion thread, which, combined with my idea of a "changing personality", is pretty congruent with your post, I think.

Please note that I have since changed my mind a bit about the threshold; now, in my opinion, it is not necessary for exploration, but it should still be employed for the subset of (activities involving violence) AND (NPCs that have never employed violence before), since I think it is too hard to calculate the potential rewards of violent behavior without any experience to base estimations on.

I will probably be posting in fatmop's thread regarding this from now on, so as not to "double post" about new ideas.

Re: SETI, Protein Folding, LT AI Constants??

#12
scousematt wrote:From Devlog 4th Feb 2014
But how much, exactly? :think: I believe that this is one of the most fundamental constants in the game. But is there any way to derive it? Not sure, short of gathering data from loads and loads of hours of simulation.
In today's devlog, Josh mentions that he has pulled the AI away from the graphical side of the game and that brute-forcing the constants is one way to generate appropriate values for the AI.

Every few days there are posts saying 'I missed the Kickstarter, what can I do to help?'

How much work would it be to make a stand-alone AI that can run in the background and send reports back to LT-Homebase (probably inside an extinct volcano)? This way the AI could be run through hundreds of hours of simulation.

I'm sure the folks that received the prototype would be willing to help out if the software is to remain a little more in-house.

However, based on what I have read so far, I have a feeling that Josh would much rather find an elegant solution than just throw CPU cycles at it, but I would be willing to run an LT-AI program without hesitation.
Wow, that's actually a pretty interesting idea I had never considered...leveraging the fanbase for CPU cycles :monkey: :ugeek:

Not exactly sure how that would work yet, but if I ever do decide that I need huge amounts of data, for example, in the "balancing" phase of development, that might be a fantastic way to do it. Just let hundreds of people run a console version of LT and then send me the logs so I can have a look at how the different universes are progressing :geek:

Thanks for the idea!
“Whether you think you can, or you think you can't--you're right.” ~ Henry Ford

Re: SETI, Protein Folding, LT AI Constants??

#13
JoshParnell wrote:Wow, that's actually a pretty interesting idea I had never considered...leveraging the fanbase for CPU cycles :monkey: :ugeek:

Not exactly sure how that would work yet, but if I ever do decide that I need huge amounts of data, for example, in the "balancing" phase of development, that might be a fantastic way to do it. Just let hundreds of people run a console version of LT and then send me the logs so I can have a look at how the different universes are progressing :geek:

Thanks for the idea!
If feasible, you could tie this in with the idea about using genetic algorithms (link). If you set up test simulations, express which end-states of the simulation count as "balanced" via a fitness function, and specify which parameters should be variable, you could use genetic algorithms to do the game balancing automatically, then ship all that functionality out to us to run on our computers and feed the results back to you. It'd be complicated to set up, I guess, but it might make balancing easier in the long run.
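
To sketch what that loop could look like (with a made-up notion of "balanced", made-up parameter names, and a trivial stand-in for the actual simulation that would really run on our machines):

Code: Select all
import random

PARAM_NAMES = ["mining_yield", "trade_margin", "combat_loot"]

def simulate(params, rng):
    """Stand-in for a headless LT run: returns credits earned per profession."""
    return {name: params[name] * rng.uniform(0.8, 1.2) * 1_000 for name in PARAM_NAMES}

def fitness(params, rng):
    """Balanced = professions end up with similar earnings (low spread)."""
    earnings = list(simulate(params, rng).values())
    mean = sum(earnings) / len(earnings)
    spread = sum(abs(e - mean) for e in earnings) / len(earnings)
    return -spread  # higher is better

def evolve(generations=50, pop_size=20, seed=0):
    rng = random.Random(seed)
    population = [{n: rng.uniform(0.1, 2.0) for n in PARAM_NAMES} for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda p: fitness(p, rng), reverse=True)
        parents = population[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = {n: rng.choice((a[n], b[n])) for n in PARAM_NAMES}  # crossover
            child[rng.choice(PARAM_NAMES)] *= rng.uniform(0.9, 1.1)    # mutation
            children.append(child)
        population = parents + children
    return max(population, key=lambda p: fitness(p, rng))

if __name__ == "__main__":
    best = evolve()
    print("best balance parameters:", {k: round(v, 2) for k, v in best.items()})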
