AI

Started by Sinjin, August 04, 2023, 14:32:12


Sinjin

I heard AI can do anything? Like, plug your numbers in and magic comes out. So I thought: great, I'll give it my inputs so it can learn from my actions, and give it all the food sources and just the food meter or health, for example. Or you can just plug a picture in... whatever you want. Unfortunately all the examples I saw were in Python. It's unreadable to me, and it seems quite hard to do in the end. I also don't really understand: if I have, let's say, 5 outputs, how would I know which output is for movement or the action button or whatever? I got stuck at the backward function that adjusts all the weights. But I got this far:
superstrict

const euler#=2.71828182845904523536'exp(1)
seedrnd 0

local ai:tai=new tai
ai.init 3,3,4,2 'inputnodes,layers,nodesperlayer,outputnodes
for local b%=0 until 15
 print "iteration:"+b
 ai.input([Float(0.1),Float(0.2),Float(0.3)],0) 'cast the literals so the array is Float[] like the parameter
 for local a%=0 until ai.layer.length
  ai.layer[a].printlayer
 next
 print
 ai.learn
next
input

type tneuron
  field weight#
  field output#
endtype

type tlayer
  field neuron:tneuron[]
  field bias#

  method reset(neurons%)
    neuron=neuron[..neurons]
    for local a%=0 until neurons
      neuron[a]=new tneuron
    next
  endmethod

  method set(v#[],biased#)
    bias=biased
    for local a%=0 until v.length
      neuron[a].weight=v[a]
    next
  endmethod

  method forward(fromlayer:tlayer)
    for local a%=0 until neuron.length
      neuron[a].output=fromlayer.bias
      for local b%=0 until fromlayer.neuron.length
        neuron[a].output:+fromlayer.neuron[b].output*neuron[a].weight
      next
'relu
      neuron[a].output=max(0,neuron[a].output)
'      neuron[a].output=tanh(neuron[a].output)
    next
  endmethod

  method backward(fromlayer:tlayer,v#)
    'incomplete: d below is the same for every neuron, so each weight gets the
    'same correction instead of a per-connection gradient
    for local a%=0 until neuron.length
      local d#
      for local b%=0 until fromlayer.neuron.length
        d:+v*fromlayer.neuron[b].weight
'        d:+v/l.neuron[b].weight
      next
'print v+":"+d+":"+neuron[a].weight
      neuron[a].weight:+d
    next
  endmethod

  method printlayer()
    local t$="w:"
    for local a%=0 until neuron.length
      t:+float2str(neuron[a].weight)+", "
    next
    t:+"o:"
    for local a%=0 until neuron.length
      t:+float2str(neuron[a].output)+", "
    next
    print t+"bias:"+float2str(bias)
  endmethod
endtype

type tlayersoftmax extends tlayer
  method forward(l:tlayer)
    local ad#,m#
    for local a%=0 until neuron.length
      neuron[a].output=l.bias
      for local b%=0 until l.neuron.length
        neuron[a].output:+l.neuron[b].output*neuron[a].weight
      next
      m=max(m,neuron[a].output)
    next
    for local a%=0 until neuron.length
      neuron[a].output:-m
      neuron[a].output=exp(neuron[a].output) 'softmax numerator e^(x-m); Exp() already computes e^x
      ad:+neuron[a].output
    next
    for local a%=0 until neuron.length
      neuron[a].output:/ad
    next
  endmethod
endtype
type tlayersoftsin extends tlayer
  method forward(l:tlayer)
    local ad#,m#
    for local a%=0 until neuron.length
      neuron[a].output=l.bias
      for local b%=0 until l.neuron.length
        neuron[a].output:+l.neuron[b].output*neuron[a].weight
      next
'      m=max(m,neuron[a].output)
      ad:+neuron[a].output
    next
    for local a%=0 until neuron.length
      if ad then neuron[a].output=sin((1+neuron[a].output)*45) else neuron[a].output=1-rnd(2)
    next
  endmethod
endtype

type tai
  field layer:tlayer[] '[0]=input [..]=other [last]=output

  method init(inputneurons%,layers%,neuronsperlayer%,outputneurons%)
print "all neurons:"+(inputneurons+layers*neuronsperlayer+outputneurons)
    layer=layer[..layers+2]
    layer[0]=new tlayer
    layer[0].reset inputneurons
    layer[layer.length-1]=new tlayersoftmax
    layer[layer.length-1].reset outputneurons
    for local a%=1 until layers+1
      layer[a]=new tlayer
      layer[a].reset neuronsperlayer
    next

    for local a%=0 until layer.length
'      layer[a].bias=rnd(1)
      for local b%=0 until layer[a].neuron.length
        layer[a].neuron[b].weight=1-rnd(2)
      next
    next
  endmethod

  method input(w#[],biased#)
    layer[0].bias=biased
    for local a%=0 until w.length
      layer[0].neuron[a].output=w[a]
    next
    forward
  endmethod

  method forward()
    for local a%=0 until layer.length-1
      layer[a+1].forward layer[a]
    next
  endmethod

  method learn()'y#[])
    local l:tlayer=layer[layer.length-1]
    local om#=-1,op%',o%[l.neuron.length]
    for local a%=0 until l.neuron.length
      if (l.neuron[a].output>om) then
        om=l.neuron[a].output
        op=a
      endif
    next
'    o[op]=1
'    local c#[l.neuron.length]
'note: the "target" here is the net's own most confident output; no desired output is passed in yet
    local loss#=-log(max(1e-7,min(1-1e-7,l.neuron[op].output)))
'    for local a%=0 until l.neuron.length
'      loss:+log(max(1e-7,min(1-1e-7,l.neuron[a].output)))*o[a]
''      loss:+log(l.neuron[a].output+1e-7)*o[a]
'      c[a]=
'    next
'    local loss#=-l.neuron[op].output
print "loss:"+loss+":"+om+":"+op

    for local a%=layer.length-1 to 1 step-1
      layer[a-1].backward layer[a],loss
    next
  endmethod
endtype

function float2str$(fl#)
  local ret$=fl
  return left(ret,instr(ret,".")+3)
endfunction
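
About the "how do I know which output is which?" question above: you decide that mapping yourself. A common readout is to take the output neuron with the largest value (argmax) and use its index as the chosen action. A minimal sketch, assuming the tai type above and a made-up table of four monkey actions:

'hypothetical action table; the index of the strongest output neuron selects the entry
local actions$[]=["idle","moveleft","moveright","eat"]

function pickaction%(out:tlayer)
  'argmax over the output layer: the neuron with the largest output wins
  local best%=0
  for local a%=1 until out.neuron.length
    if (out.neuron[a].output>out.neuron[best].output) then best=a
  next
  return best
endfunction

'usage, after ai.input(...) has run the forward pass:
'local act%=pickaction(ai.layer[ai.layer.length-1])
'print "chosen action: "+actions[act]

With a softmax output layer the same index can also be read as a confidence, since the outputs sum to 1.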

Sinjin

#1
I guess I made an error early on; it's not this, but that:
type tneuron
  field weight#[]
  field output#
endtype
You need a weight for every connection into the next layer, not just one per neuron. And I guess you don't need the output stored in every neuron/node, it just gets passed through... I thought saving something in each node/layer could make things easier. All those different curves... it's not 0 or 1; I guess it only comes down to 0s and 1s if you have millions of neurons. This is too hard for me, although the code doesn't seem too hard... I wonder. But it stays true that you can just shove values in, no matter what!
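With weight#[] per neuron, the forward pass would look roughly like this (only a sketch, assuming each neuron holds one weight per neuron of the previous layer):

  method forward(fromlayer:tlayer)
    for local a%=0 until neuron.length
      'start from the bias, then weight every output of the previous layer individually
      local sum#=fromlayer.bias
      for local b%=0 until fromlayer.neuron.length
        sum:+fromlayer.neuron[b].output*neuron[a].weight[b]
      next
      neuron[a].output=max(0,sum) 'ReLU
    next
  endmethod

Each neuron then carries fromlayer.neuron.length weights, so the 3,3,4,2 net in the first post has 3*4 + 4*4 + 4*4 + 4*2 = 52 weights plus the biases.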
In Python it looks like 3 lines, but I guess it's more like 100 lines if you expand everything for non-understanders like me :D Plus I'm not looking for 99.9999% accuracy, I just want my monkey on my little island to "mimic" me based on my inputs. So in the end, all that sigmoid stuff and exp functions... I want something very simple, and as I understood it, that works! Even if you use another model it still does a good job, not a great job, but if it's fast and it takes 4 hours to learn from me, that's fine! More than fine!
Although it's super fascinating that you can have some code that can take anything!
Some people keep the weights in the range 0 to 1, others use -1 to 1; that's all free to explore, for sure. Same for the activation functions... I immediately thought of some sine, so it's fast but maybe not so accurate. After all, all I need is for my monkey to learn (no matter how long it takes) to eat and find food :D
If that takes 5 hours, fine, you easily play 5 hours instead of 15 minutes :D
But jeez, if you think about it: any input... and it gives some output. Right now I'm thinking it learns its own outputs, because you use the output as an input again. It's kind of strange: if I ask whether it's a cat or not and I have 2 or 3 outputs, what do the other outputs represent? And that would be the same as saying to the AI "hello, how are you?" and it just takes it... over time I want it to just say the same thing back, because there is no other data. That's what I meant before: it can never come up with something new! As I said, in my game, if I never eat, the AI will never come up with a new idea like going over to that plant to eat whatever is growing there. It will "never" happen, not even in ChatGPT version 20!

Sinjin

#2
What I'm saying is: if the model has to agree with, say, a sine wave, why can't I just use that sine wave? I think for whatever it learns there will be a formula, a much simpler one, always! People just don't see what a node is doing because they don't see the intermediate steps; everybody only sees the final output. But no AI can come up with something that has never been! It's impossible for the AI to extrapolate at all. One node cannot be better than what you force it to do, so in effect it cannot learn or extrapolate beyond what it's given. If you ask ChatGPT what the answer to 1+2+3+4+5+6+7 is... well, if it has never seen such a thing, it won't think about it. It just can't. So my understanding is: it will never teach you anything it has never seen. Maybe brainstorming is cool, but I already do that by posting something here; maybe somebody else has a better or different idea and I take it. The AI, on the other hand, thinks like: oh, I've never seen this, so I return "not seen" or a lie :D Like "oh yeah, you have to add 1 to x and 1 to y", haha, what a super inefficient answer :D And that's exactly what it does! AI cannot create, only humans do!
And maybe hundreds of professors will immediately disagree with me... show me an example where the AI came up with something it was never trained for. It never happens!
No matter the sigmoid stuff or gradients... urgh... it will never come up with something it was never shown! And indeed, without all that curved stuff it won't do anything, so I guess it's kind of the floating-point error that makes it learn, at least most of the time. In my theory, AI can never come up with anything that no human can think of :D
I asked ChatGPT (an older version, I think) about a=a+a/1000 and whether it could do the opposite, not getting slower but faster in numbers... I guess it never received anything about that, so it couldn't answer!
But I did, in my code! So it will never exceed human knowledge and it cannot come up with anything. And those open models don't learn from your inputs... they could, but it's disabled for the time you are "talking" to them. We don't want it to learn bad stuff :D
So even though we have "bad people" out there, it will never learn that, because by the time you and I get access, that stuff has already been handled; they try to get it out of their model. They never think: it's part of humankind! I would say let it run through, let it learn freely, and don't cut it off at 10%? of humans. Jeez, the bad stuff comes with humans; how can you even filter that out, knowing that the AI knows ONLY what its input contains? It feels so wrong to try to filter the "bad" things out. Ask a fly if it was bad for munching on your food, lol. Besides, the model is helpless while the fly still wants to find food... good food ^^
I'm not Facebook, but I guess there are 20% good people and 80% bad? I mean, who decides that? The AI is trained on "this should be bad"... but in fact, nature survives! We are all bad, now what? AI can't make it good, lol. AI is nothing, neither good nor bad! And if you think a bit more: money doesn't make you happy; food and water and shelter do! The AI would then think you don't need money, which is correct, but then you can't buy anything... something that has no value to you. What value would it have to the AI? None, because it has never seen that case! Even with my own thoughts I bias the AI, haha, like backwards :D I understand the concept, but I'm unable to code it, and that's generally it. Money has value only to other humans... if I think for myself, I really don't see what makes money attractive, I really don't, and I'm special... but I like to learn stuff. That's it.
Remember you can shove anything in there? Surely it doesn't know anything. If you shove your English words in there, it doesn't "know" whether that's right; it relies on your input saying whether that action was good ^^ But who says the input is always bad? The AI won't know it's always bad, haha.
If the AI returns blabla and you say it was good, it will continue in the same manner!
But still, and I can't get over it, it will learn. So if I feed it my inputs, regardless of bad speech haha, it will learn to mimic me! And that's what I want. I don't want to pre-train it; I want to train it with my inputs coming in every frame!
"How are you ""today""" will never happen, because it has never seen the word "today" before!

Sinjin

#3
And for real? You're still alive... that's more important than any AI silliness :D The AI doesn't learn from you, and AI cannot come up with anything new... so what's it good for? You are alive, that's what's important! AI doesn't know "alive", simple as that. I mean, you can give it some known problem... I doubt (wrong word, I know, I'm an AI learning here :D) it will give a meaningful answer, because AI is like... without your input, no output comes out. It won't think on its own; no matter how much time it has, AI is unable to "think". What would it even think about, its own answers? There's the problem already. It will never come up with new stuff, but to me it's a great tool for brainstorming... it's like another person joining the chat... and yes, I saw the ads they do? For that I agree it's a super tool! If it's able to learn too, I don't want the next session to start all over! And in my opinion I would leave all the "bad" stuff in there; that's part of what we all "are". Even the attempt to filter it all out... it's like I take you, shove you into a chair and scan all your neurons, not based on what you learned, but now I want to adjust all the neurons in your brain to fit/follow my rules! In fact, the AI only learns from the master: you! It doesn't depend on what society wants.
And sorry, I do my thing... I don't care about misspelling; that's an exact task for an AI to correct me :D And that's it... I'm a bad coder, but I think that's what it's optimized for: not making new predictions, only making new predictions you already know the values for. AI can never say "you are so good" when it has never learned or seen the word "so" before, haha.

Sinjin

People don't understand what AI is. I always try to explain it like this: you give the AI your name, Steve, and you ask immediately afterwards "what's my name?", and it always comes up with random names. ALWAYS!!!! Because it's one brain against hundreds of people talking to the AI at that moment, plus it won't learn from whatever you say. You can tell it it's stupid... it won't go backwards through its measurements (weights) and adjust all the neurons, lol.
It's a fixed model you're speaking to.

Sinjin

As for my game, I'm thinking of giving it just my inputs and the surroundings of my world. Essentially: I see this, so I pass this to the AI! And it will never learn if I eat nothing, lol. It can only learn to use an action more often because that seems to be a "good" way.

Sinjin

If you teach it that dying is a good path, then I'm sure it will learn to die as quickly as possible... with 90% accuracy, hahaha. But you see? Some accuracy is always in there. Like when you say about another person: oh, she's like 60% no good, lol.

Sinjin

I just want over 50%, so if my monkey starts moving I would expect it to keep moving (all based on my inputs and values like health). It's a good thing to move around, or to get food after a while.

Sinjin

#8
I teach it my inputs, together with all the values that come along with them, like the health bar or food, lol.
And so far the whole network likes positive numbers. So: better watch me, the player, and what that player does! No need to pre-train it; I train it on the fly, every frame. And I don't want 99.9% accuracy, I leave that to the AI itself to solve, haha (that's why I said earlier it doesn't matter whether your input is a whole picture or the game itself). I just want my monkey to follow me if nothing better occurs :D In the game, reaching something like 75% is more than enough. I would guess that with 75% confidence my monkey will move around, see the food and take the food. I don't need 99% just because there was a snake in the path, loool.
And in theory you could even cut out nodes that are 100% positive or negative. When a node is 100% sure your input was wrong, it still gets passed along with all the other neurons; even if that neuron is 100% sure, it won't matter much in the network. Thinking about this: wouldn't there be a way to just code it, like "there is an object, don't bump into it"? But that's hard-coding all along! I mean, if you have a predefined street, you cannot go over the curbs... the AI just does what you already thought of; there is no new information it has to look for, so your system would be closed. It works without AI! It's kind of like telling the AI "this is 100% and this is 0%"... then you don't need any AI, you can determine that from the beginning, and even better than any AI could! Like Need for Speed: all those streets and curves. I bet Need for Speed does a better job than any AI trying it over and over! You see? AI isn't the solution you seek. I mean, I can make my monkey do the exact thing I did, just delayed... but it won't find any food, because it would try to take the bananas I already got, lol.
And that's exactly it: instead of making my monkey do the exact same thing, it could look for other food sources to "mimic" me. I ate; I let the AI find its own way :D And moving around is always a reward... let's imagine you don't give it the whole map, just a field of view :D
I'm still confused about the outputs. Say I gave it 2 inputs, just me walking around, -1, +1 or 0... how do I tell it that's a good thing, but not always? If you've eaten, you don't need to find anything :D
Again, if it returns 75% "that's a good action based on what I did", that's more than I would expect, and I would see it in the monkey's actions! :D
All I want is for the monkey to learn, lol. Again, if it's more than 50% I would be so happy as a programmer, because that means the monkey learned from my actions :D
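
Just to make the "train it on the fly, every frame" idea concrete, here is a rough sketch. Everything in it is made up for illustration: gatherstate and playeraction stand in for real game code, pickaction is the argmax helper sketched after the first post, and learn() as written above doesn't take a target yet:

'stand-ins for real game code: random state, and a "player" who eats when food is low
function gatherstate#[]()
  return [float(rnd(1)),float(rnd(1)),float(rnd(1))] '[health, food meter, distance to food]
endfunction

function playeraction%(state#[])
  if (state[1]<0.3) then return 3 'eat when the food meter is low
  return 1+rnd(2)                 'otherwise wander left or right
endfunction

'one imitation step per frame: show the net what the player saw, record what the player did
function trainframe(ai:tai,hits% var,total% var)
  local state#[]=gatherstate()
  local target%=playeraction(state)
  ai.input state,0                                   'forward pass on the current frame
  ai.learn                                           'learn() would need target passed in to really mimic
  local guess%=pickaction(ai.layer[ai.layer.length-1])
  total:+1
  if (guess=target) then hits:+1                     'hits/total is the "over 50%?" measure
endfunction

Calling trainframe once per game frame gives exactly that running accuracy: if hits/total climbs past 0.5, the monkey is mimicking the player more often than not.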

Sinjin

#9
But that's the thing in itself: you feed something a bunch of numbers and a positive result comes out! Not "positive", but the result you want it to be! Once trained, it always comes up with those numbers... and the longer I think about it, I guess there is always a final math solution without the AI. Always! It's just that people never came up with the math for how to drive... the AI just tries to be more confident. You train it, and it never comes up with something like "driving on the sidewalk seems way better", lol.

Sinjin

I talk a lot, sure... it's a vast topic, but if I look at the code it's just about 100 lines :D Super easy!

Pakz

#11
Those social simulations are interesting, like the recently published one they used to create those generated South Park episodes. I am not ready to invest in the new AI yet; the time needed to learn it and the additional compute required are not something light.

I have some ant/bee colony and genetic algorithm ideas for possible future game projects though. Maybe when a site or video pops up that teaches how to make and train your own AI aimed at the intellectual level of a 5-year-old (!) I might look at it :)

Sinjin

Now I messed it up completely, haha. But I like this way better: it passes a vector through and keeps the weights in a matrix. It's too hard for me, though. When I first heard about the coding side, that guy said it's super easy... yeah, the forward pass is.

superstrict
'import sj.all2
const euler#=2.71828182845904523536'exp(1)
'seedrnd 0

test
const learningrate#=0.001
const dropoutrate#=0.01
function test()
  const startdraw%=20
  const maxepoche%=10000

  global nn:tlayer[4] 'init layers
  for local a%=0 until nn.length'-1
    nn[a]=new tlayer
  next
'  nn[nn.length-1]=new tlayersoftmax
  nn[0].init 2,6,new tactivationsigmoid          'input  layer, 2 nodes to 6
  nn[1].init 6,6,new tactivationreluleaky        'hidden layer, 6 nodes to 6
  nn[2].init 6,6,new tactivationreluleaky        'hidden layer, 6 nodes to 6
  nn[3].init 6,2,new tactivationsigmoid'softmax  'output layer, 6 nodes to 2

  for local epoche%=maxepoche to 0 step-1
rem
    if (epoche mod 1000=0) then
      for local a%=0 until nn.length
        nn[a].w.dropout dropoutrate
      next
    else
      for local a%=0 until nn.length
        nn[a].w.nodropout
      next
    endif
endrem
'    if (epoche mod 1000=0) then randomizedropout nn,dropoutrate
    randomizedropout nn,dropoutrate

    local invec#[]=[float(rnd(1)),float(rnd(1))]
    if (epoche<startdraw) then print "epoche:"+(maxepoche-epoche);printvector "input:",invec

    local vec#[]=copyvector(invec)
    for local a%=0 until nn.length
      vec=nn[a].forward(vec)
    next

    if (epoche<startdraw) then printvector "confidence:",vec

    invec=mean([invec[0],invec[1]],1) 'biggest value to 1, others to 0 (training data)
    if (epoche<startdraw) then printvector "mean:",invec;print
rem
    for local a%=nn.length-1 to 0 step-1
      local tvec#[]=nn[a].backward2(invec,vec)
      invec=tvec
      vec=tvec
'      invec=vec'copyvector(tvec)
'      vec=tvec'copyvector(invec)
      if (epoche<startdraw) then print "loss:"+nn[a].loss
    next
endrem

'rem
    vec=nn[nn.length-1].learn(invec,vec)
    for local a%=nn.length-1 to 0 step-1
      vec=nn[a].backward(vec)
    next
'endrem
  next
  for local a%=0 until nn.length
    nn[a].printout
  next
  input
endfunction

'-----------------------------------

function mean#[](v#[],toval#=0)
  local p%,m#=-10000
  for local a%=0 until v.length
    if (m<v[a]) then
      m=v[a]
      p=a
    endif
  next
  for local a%=0 until v.length
    if (a<>p) then v[a]=0 elseif toval then v[a]=toval
  next
  return v
endfunction

function copyvector#[](v#[])
  local ret#[v.length]
  for local a%=0 until v.length
    ret[a]=v[a]
  next
  return ret
endfunction

function printvector(t$,v#[])
  for local a%=0 until v.length
    t:+v[a]+", "
  next
  print t
endfunction

'-----------------------------------

type tactivation
  method func#(x#,m#=0) abstract
  method deri#(x#) abstract
endtype

type tactivationperceptron extends tactivation
  method func#(x#,m#)
    return (x>0)
  endmethod
  method deri#(x#)
    return (x<=0)
  endmethod
endtype

type tactivationrelu extends tactivation
  method func#(x#,m#)
    return max(0,x)
  endmethod
  method deri#(x#)
    return (x>0)
  endmethod
endtype

type tactivationreluleaky extends tactivation
  method func#(x#,m#)
    return max(0.01*x,x)
  endmethod
  method deri#(x#)
    if (x>0) then return 1
    return 0.01
  endmethod
endtype

'use when 2 outputs, use softmax if more
type tactivationsigmoid extends tactivation
  method func#(x#,m#)
    return 1/(1+exp(-x))
  endmethod
  method deri#(x#)
    return x*(1-x)
  endmethod
  method deri2#(x#)
    local t#=1/(1+exp(-x))
    return t*(1-t)
  endmethod
  method deri3#(x#)
    return 2*x-3
  endmethod
endtype

type tactivationtanh extends tactivation
  method func#(x#,m#)
    return tanh(x)
'    local x#=exp(x+x)
'    return (x-1)/(x+1)
  endmethod
  method deri#(x#)
    local t#=tanh(x)
    return 1-t*t
  endmethod
endtype

rem
type tactivationelu extends tactivation
  method func#(x#,m#)
    if (x>0) then return x
    return exp(x)
  endmethod
  method deri#(x#)
    return exp(x)/x
  endmethod
endtype

type tactivationprelu extends tactivation
  method func#(x#,m#)
    if (x>0) then return x
    return x*m
  endmethod
  method deri#(x#)
    return exp(x)/x
  endmethod
endtype

type tactivationswish extends tactivation
  method func#(x#,m#)
    return m/(1+exp(-x))
  endmethod
  method deri#(x#)
  endmethod
endtype

type tactivationsoftplus extends tactivation
  method func#(x#,m#)
    return log(1+exp(x))
  endmethod
  method deri#(x#)
  endmethod
endtype

type tactivationapproxtan extends tactivation
  method func#(x#,ap# var,b# xar)
    for local a%=0 until 5
      local d#=0.0001
      local x2#=a+d
      local y1#=a*a*2
      local y2#=x2*x2*2
      ap=(y2-y1)/(x2-a)
      b=y2-ap
    next
  endmethod
endtype

type tactivationsilu extends tactivation
  method func#(x#,m#)
    return x/(1+exp(-x))
  endmethod
endtype
endrem

'use if 3 or more outputs
type tactivationsoftmax extends tactivation
  method func#(x#,m#)
    return exp(x-m) 'softmax numerator e^(x-m); Exp() already computes e^x
  endmethod
  method deri#(x#)
    return (1-x)*x
  endmethod
endtype

'-----------------------------------

type tlayersoftmax extends tlayer
  method forward#[](v#[])
    v=w.dot(v)
    local ad#,m#'=-100000
    for local a%=0 until v.length
      m=max(m,v[a])
    next
    for local a%=0 until v.length
'      v[a]=exp(euler^(v[a]-m))
      v[a]=act.func(v[a]-m)
      ad:+v[a]
    next
    for local a%=0 until v.length
      v[a]:/ad
    next
    return v
  endmethod
endtype

type tlayer
  field w:tmatrix=new tmatrix
  field b#[]
  field act:tactivation
  global loss#
'  global lastoutput#[]

rem
  method load(f:tstream)
    b=b[..readpint(f)]
    for local a%=0 until b.length
      b[a]=readfloat(f)
    next
    w.load f
    select readbyte(f)
    case 1 act=new tactivationsigmoid
    case 2 act=new tactivationrelu
    case 3 act=new tactivationsoftmax
    case 4 act=new tactivationtanh
    endselect
  endmethod

  method save(f:tstream)
    writepint f,b.length
    for local a%=0 until b.length
      writefloat f,b[a]
    next
    w.save f
    if     tactivationsigmoid(act) then writebyte(f,1)..
    elseif tactivationrelu   (act) then writebyte(f,2)..
    elseif tactivationsoftmax(act) then writebyte(f,3)..
    elseif tactivationtanh   (act) then writebyte(f,4)
  endmethod
endrem

  method init(x%,y%,act0:tactivation)
    act=act0
    w.init x,y
    w.randomize
    b=b[..y]
    for local a%=0 until y
      b[a]=rnd(2)-1
    next
  endmethod

  method forward#[](v#[])
    v=w.dot(v)
    for local a%=0 until v.length
      v[a]=act.func(v[a]+b[a])
    next
    return v
  endmethod

  method backward2#[](me#[],v#[])
'    local v2#[me.length]
    loss=0
    for local a%=0 until v.length
      v[a]=(me[a]-v[a])^2'^2'*0.5'*2
      loss:+v[a]
    next
    loss:/me.length
loss=act.deri(loss)


    for local a%=0 until v.length
      v[a]=act.deri(v[a])
    next

    local w2:tmatrix=new tmatrix
    w2.extend v
    local v2#[]=w2.dot(v)
    for local a%=0 until v2.length
      v2[a]:*learningrate*loss
    next

    w.flipto w2

'print w2.m[0].length+":"+w2.m.length+"::"+w2.dimx+":"+w2.dimy+"::"+v.length+","+v2.length+","+me.length
    local gradi#[]=w2.dot(v)

    for local a%=0 until v.length
'if not w.drop[a] then
      b[a]:-v[a]*learningrate*loss
'endif
'      v2[a]=act.deri(v2[a])
    next
    w.upd(v2)',loss)
    return gradi
  endmethod

  method learn#[](y#[],v#[])
    local v2#[y.length]
'    local m#=-maxfloat,mp%
    loss=0
    for local a%=0 until y.length
      v2[a]=(y[a]-v[a])^2'^2'*0.5
'      if (m<v2[a]) then
'        m=v2[a]
'        mp=a
'      endif
      loss:+v2[a]
    next
    loss:/y.length
'    loss=log(max(1e-7,min(1-1e-7,v[mp])))
loss=act.deri(loss)
    return v2
  endmethod

  method backward#[](v#[])
    for local a%=0 until v.length
'      v[a]=act.deri(v[a])
    next

    local w2:tmatrix=new tmatrix
    w2.extend v
    local v2#[]=w2.dot(v)
    for local a%=0 until v2.length
      v2[a]:*learningrate
    next

    w.flipto w2

    local gradi#[]=w2.dot(v)

    for local a%=0 until v.length
      b[a]:-v[a]*learningrate
      v2[a]=act.deri(v2[a])
    next
    w.upd v2',loss
    return gradi
  endmethod

  method printout()
print
    printvector "bias:",b
    w.printout
  endmethod
endtype

'--------------

type tmatrix
  field dimy%,dimx%
  field m#[][]
  field drop%[]

rem
  method load(f:tstream)
    dimy=readpint(f)
    dimx=readpint(f)
    m=m[..dimy]
    for local y%=0 until dimy
      m[y]=m[y][..dimx]
      for local x%=0 until dimx
        m[y][x]=readfloat(f)
      next
    next
  endmethod

  method save(f:tstream)
    writepint f,dimy
    writepint f,dimx
    for local y%=0 until dimy
    for local x%=0 until dimx
      writefloat f,m[y][x]
    next;next
  endmethod
endrem

  method init(x%,y%)
    dimy=y
    dimx=x
    m=m[..y]
    drop=drop[..y]
    for y=0 until dimy
      m[y]=m[y][..x]
    next
  endmethod

  method randomize()
    for local y%=0 until dimy
    for local x%=0 until dimx
      m[y][x]=rnd(2)-1
    next;next
  endmethod

  method dropout(p#)
    for local y%=0 until dimy
      drop[y]=(rnd(1)<p)
    next
  endmethod

  method nodropout()
    for local y%=0 until dimy
      drop[y]=0
    next
  endmethod

  method upd#[](v#[])',loss#)
    local v2#[dimx],m2#[][dimy]
    for local y%=0 until dimy
      m2[y]=m2[y][..dimx]
'if not drop[y] then
      for local x%=0 until dimx
'        m[y][x]:+(1/m[y][x]) * v[y] * (loss/v[y])
        m2[y][x]=m[y][x]-m[y][x]*v[y]'*loss
        v2[x]:+m[y][x]
      next
'endif
    next
    m=m2
    return v2
  endmethod

  method dot#[](v#[])
    local v2#[dimy]
    for local y%=0 until dimy
'if not drop[y] then
      for local x%=0 until dimx
        v2[y]:+m[y][x]*v[x]
      next
'endif
    next
    return v2
  endmethod

  method extend(v#[])
    dimx=1'v.length
    dimy=v.length
    m=m[..dimy]
    for local y%=0 until dimy
      m[y]=m[y][..1]
      m[y][0]=v[y]
    next
  endmethod

  method flip()
    local m2#[][dimx]
    for local a%=0 until dimx
      m2[a]=m2[a][..dimy]
    next
    for local y%=0 until dimy
    for local x%=0 until dimx
      m2[x][y]=m[y][x]
    next;next
    dimx=dimy
    dimy=m2.length
    m=m2
  endmethod
  method flipto(m2:tmatrix var)
    m2.m=m2.m[..dimx]
    m2.dimy=dimx
    m2.dimx=dimy
    for local a%=0 until dimx
      m2.m[a]=m2.m[a][..dimy]
    next
    for local y%=0 until dimy
    for local x%=0 until dimx
      m2.m[x][y]=m[y][x]
    next;next
  endmethod

  method invert()
    for local y%=0 until dimy
    for local x%=0 to y
      local t#=m[x][y]
      m[x][y]=m[y][x]
      m[y][x]=t
    next;next
  endmethod

  method mul(src:tmatrix)
    local m2#[][dimy]
    for local y%=0 until dimy
      m2[y]=m2[y][..dimx]
      for local x%=0 until dimx
        m2[y][x]=0
        for local a%=0 until dimy
          m2[y][x]:+m[y][a]*src.m[a][x]
        next
      next
    next
    m=m2
  endmethod

  method printout()
    print "matrix"
    for local y%=0 until dimy
      local t$
      for local x%=0 until dimx
        t:+m[y][x]+", "
      next
      print t
    next
  endmethod
endtype

function randomizedropout(l:tlayer[],p#)
  for local a%=0 until l.length
    if (rnd(1)<p) then
      local x%=rnd(l[a].w.dimx)
      local y%=rnd(l[a].w.dimy)
      l[a].w.m[y][x]=rnd(2)-1
      l[a].b[y]=rnd(2)-1
    endif
  next
endfunction
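
Since the backward pass is the sticking point in this whole thread, here is a tiny self-contained reference program, only a sketch and not wired into the tlayer/tmatrix code above: one hidden layer with sigmoid, squared error and plain gradient descent, learning XOR. The whole trick is the delta rule: output delta = (output - target) * activation'(output), hidden delta = output delta * connecting weight * activation'(hidden), and every weight moves by -learningrate * delta * the input that fed it.

superstrict
'minimal backprop reference: 2 inputs -> 4 sigmoid hidden units -> 1 sigmoid output,
'squared error, plain gradient descent, learning XOR

const lr#=0.5     'learning rate
const hidden%=4   'hidden neurons

function sigmoid#(x#)
  return 1.0/(1.0+exp(-x))
endfunction

global w1#[hidden,2],b1#[hidden] 'input(2) -> hidden
global w2#[hidden],b2#           'hidden   -> output(1)
seedrnd millisecs()
for local j%=0 until hidden
  b1[j]=rnd(2)-1
  w2[j]=rnd(2)-1
  w1[j,0]=rnd(2)-1
  w1[j,1]=rnd(2)-1
next
b2=rnd(2)-1

'the four XOR patterns and their targets
local px#[4,2],py#[4]
px[1,1]=1;px[2,0]=1;px[3,0]=1;px[3,1]=1
py[1]=1;py[2]=1

local h#[hidden]
for local epoch%=0 until 20000
  for local p%=0 until 4
    'forward pass
    for local j%=0 until hidden
      h[j]=sigmoid(w1[j,0]*px[p,0]+w1[j,1]*px[p,1]+b1[j])
    next
    local o#=b2
    for local j%=0 until hidden
      o:+w2[j]*h[j]
    next
    o=sigmoid(o)

    'backward pass
    local deltao#=(o-py[p])*o*(1-o)                'dLoss/dOutput * sigmoid'(output)
    for local j%=0 until hidden
      local deltah#=deltao*w2[j]*h[j]*(1-h[j])     'propagate through the old weight
      w2[j]:-lr*deltao*h[j]                        'each weight moves by -lr * delta * its input
      w1[j,0]:-lr*deltah*px[p,0]
      w1[j,1]:-lr*deltah*px[p,1]
      b1[j]:-lr*deltah
    next
    b2:-lr*deltao
  next
next

'show what it learned
for local p%=0 until 4
  for local j%=0 until hidden
    h[j]=sigmoid(w1[j,0]*px[p,0]+w1[j,1]*px[p,1]+b1[j])
  next
  local o#=b2
  for local j%=0 until hidden
    o:+w2[j]*h[j]
  next
  print "xor("+px[p,0]+","+px[p,1]+") -> "+sigmoid(o)
next

Depending on the random start it occasionally gets stuck and needs a re-run or more epochs, but when it converges the four printed values end up near 0, 1, 1, 0.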

William

#13
I've been interested in NNs for games since Polyworld. Today I realize Polyworld was not that impressive presentation-wise, and I have not seen NNs in games much except Unreal Tournament. It should, or could, be fun to play with adding one into a game, though thinking about it, being able to code it at runtime would be helpful.
I'm still interested in oldschool app/gamedev.

Sinjin

i dont think it has to be so intelligent for games...i mean in some instances yes, but if my little monkey lerans, and do the stuff to survive as i do...then i think its more than enough ^^ i liked the game galapagos, its a futuristic game...from the 2000's? it was super simple, you click on that guy to do something...thats the whole ai that game has :D but it was good and it felt right. i  mean it relly learned from one input lol maybe it will learn if you do nothing. all it can do is walk...in other ai i have the same feeling, if you do nothing, there is some model already built in to those. no matter how strong.