What was your final strategy challenge?


#1

Many submissions in the final hours, and perhaps some coding frustrations. What part of your strategy did you feel you needed to improve, coming into the end of the competition, to break into the next level?


#2

I recalled from Halite I (2016) having lots of magic numbers in my code.

This year, one of the first things I did was write a genetic algorithm, with the magic numbers (hyperparameters) stored in text files. I had a separate folder for each combination of map size and player count. Adding a new parameter was as simple as putting it in an enum and giving the genetic algorithm a default value (this way, retrofitting the hyperparameter files as I updated the code was automatic).

When the bot lost, I deleted the file for its parameters; when it won (or got 2nd in a 4-player game), I used its values as a seed and created a new file with parameters within ~2% of the seed values. I kept each folder's size between a minimum and a maximum to prevent overpopulation or extinction.
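A minimal sketch of that file-based setup in Python (all names, values, and the file format here are illustrative, not the actual code):

```python
import os
import random
from enum import Enum

# Hypothetical parameters; adding a member here (with a default value)
# is all that is needed to retrofit older hyperparameter files.
class Param(Enum):
    FLEE_MARGIN = 300.0
    RETURN_FULLNESS = 0.9

def load_params(path):
    """Read "NAME value" lines; params missing from the file fall back
    to their enum defaults."""
    params = {p: p.value for p in Param}
    if os.path.exists(path):
        with open(path) as f:
            for line in f:
                name, value = line.split()
                params[Param[name]] = float(value)
    return params

def spawn_child(seed_params, folder, jitter=0.02):
    """After a win, write a new file with each value perturbed within
    ~2% of the seed. After a loss, the bot's file is simply deleted."""
    path = os.path.join(folder, "params_%08x.txt" % random.getrandbits(32))
    with open(path, "w") as f:
        for p, v in seed_params.items():
            f.write("%s %f\n" % (p.name, v * random.uniform(1 - jitter, 1 + jitter)))
    return path
```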


#4

Magic numbers… they are so wrong, but why is it so hard to give up programming with them? Like a bad bf/gf: easier to stay than to leave!

E.g. when my ships have >300 halite more than the adjacent opponent, they run away; then they drop a few halite from the 10% move penalty, get down to 299, change their mind, and go back. There is no easy solution within the if-then-else world of programming. The simple >300 magic-number rule yields a huge benefit, but improving it even a little would mean developing an entire multi-factor ANN gradient run-away strategy, or saving a giant database of every historically observed us-vs-them combination in ? x ? matrices.
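To make the flip-flop concrete, a tiny sketch of the rule as described (numbers illustrative):

```python
FLEE_MARGIN = 300  # the magic number

def should_flee(my_cargo, enemy_cargo):
    # Run away when a collision would cost us far more than the opponent.
    return my_cargo - enemy_cargo > FLEE_MARGIN

# Turn 1: 601 vs 300 -> flee. Moving costs 10% of the origin cell's
# halite out of the cargo (say 2), so turn 2 is 599 vs 300: a 299
# difference -> stop fleeing and head straight back. A fixed threshold
# has no memory, so the decision oscillates around the boundary.
```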


#5

To break properly into the next level, I needed to take the lessons learned from this challenge, learn some more, and then apply them over the course of another few weeks or months of development.

One of the first problems I wanted to solve in this contest was moving ships around efficiently without colliding with friendly ships. I chose a system where all ships give weights to all five possible destinations, and then a resolution phase transforms those weights into a single set of desirable non-colliding orders.
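A minimal sketch of the idea (illustrative only, not the real evaluation or resolution code): pool every (ship, destination) weight and greedily grant the strongest ones whose destination tile is still free.

```python
def resolve_moves(weights):
    """weights: {ship_id: {dest_tile: weight}} over the five candidate
    destinations (stay still + the four directions)."""
    candidates = sorted(
        ((w, ship, dest)
         for ship, dests in weights.items()
         for dest, w in dests.items()),
        reverse=True)  # strongest desires first
    orders, claimed = {}, set()
    for w, ship, dest in candidates:
        if ship in orders or dest in claimed:
            continue  # ship already ordered, or tile already taken
        orders[ship] = dest
        claimed.add(dest)
    # A real bot also needs a fallback for a ship whose five tiles were
    # all claimed by higher-weighted moves.
    return orders
```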

The weighting system was quite powerful, and after a bit of tuning of the evaluation functions it began to perform quite well. It’s also quite flexible, and allowed for a few different types of evaluation functions where the weights have different ranges (e.g. some can be positive or negative, while others are assumed to be all positive). I added features haphazardly and did a lot of testing, and I’m reasonably happy with the end result. However, Teccles and the other top bots are several steps above, and I think this is partly the result of a more knowledgeable and structured approach to the problem. :slight_smile: I also think this underlying movement-resolution system became a burden in some ways; if I approached the problem again, I would want to try a different system where each ship chooses a single definite destination tile.

Over the weekend before the deadline, I began work on a complete replacement for the core collector_eval function, which directs much of my bot’s behaviour. The new function performed relatively poorly, and by Monday night after work I gave up on getting it to work. I had arranged to take Tuesday and Wednesday off work (today is Wednesday in my time zone; the deadline was 4pm locally), and I have spent the past day and a half frantically working out which set of features performs best against the current pool of opponents.

Local testing was of little use to me by this point, so I was uploading a new version every half hour to hour, over many hours. My final submission has most of the optional extra features switched on in some form (e.g. inspiration management in 2p games, which always performed badly locally but seems to work well in its latest form against real opponents).

Thanks to all contestants and to everyone who contributed to running the contest! It has been one of the best AI programming contests I’ve participated in (first one I’ve enjoyed as much as PlanetWars and Ants).


#6

Good thinking, weighting all 5 moves from day 1. As I think you mentioned, I found that with a per-ship, two-step scoring -> evaluation process it was hard to prevent ships with long-term goals from senselessly changing direction, and hard to make them act as a group when needed. I was reluctant to add ship states to an obviously stateless game.

I saw some lower-ranked bots with amazingly efficient, organized top-down halite-gathering strategies in the early game.


#7

(oh, damn! This was not meant as a direct reply to @Salticid’s post.)

I stumbled over the Halite Challenge in December and was immediately smitten :heart_eyes:
I had to start from scratch, and many features/strategies entered and left the code base. I plan to create a more detailed write-up in the near future about the various ranks achieved with each feature set.

The central concept of my bot is a pheromone simulation. Pheromones spread by diffusion over the game map and evaporate slowly when not replenished. They are used by individual ships for navigation and decision making. My initial idea was to have the ships behave like ants, leaving a pheromone trail while carrying yummy halite back to the base. However, I soon realized that my bot could do better. Ants do this because they lack global information about their environment, but in Halite we have all the information we need. So in addition to letting the ships emit pheromones I made the environment emit pheromones. In particular, each map cell emits pheromones based on the amount of halite it contains.
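The core update is simple. A minimal NumPy sketch of one diffusion/evaporation/emission step on a wrapping map (constants illustrative, not the tuned values):

```python
import numpy as np

DIFFUSION = 0.25    # fraction shared with the four neighbours each turn
EVAPORATION = 0.05  # fraction lost each turn when not replenished
EMISSION = 0.01     # pheromone emitted per unit of halite in a cell

def step(pheromone, halite):
    """One pheromone update on a toroidal (wrapping) game map."""
    # Diffusion: each cell exchanges pheromone with its four neighbours.
    neighbours = (np.roll(pheromone, 1, axis=0) + np.roll(pheromone, -1, axis=0) +
                  np.roll(pheromone, 1, axis=1) + np.roll(pheromone, -1, axis=1))
    pheromone = (1 - DIFFUSION) * pheromone + DIFFUSION * neighbours / 4
    # Evaporation: unreplenished pheromone decays away over time.
    pheromone *= 1 - EVAPORATION
    # Emission: every map cell emits in proportion to its halite.
    return pheromone + EMISSION * halite

# Injecting a temporary goal is one line: pheromone[y, x] += spike
```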

Now, the pheromone concentration is basically a smoothed game map. In contrast to a simple smoothing kernel it has the advantage that it is easy to inject temporary information. Want the ships to go somewhere? Inject a pheromone spike at the location, which evaporates over time. This leads to the final crucial decision:

The day before finals my bot was stuck at rank 70-90. I had some opportunistic battle mechanics in place: if one of my ships ended up next to an opponent ship it would attack if certain criteria were met. The last change that got me to rank 52 before the start of the finals was almost trivial… I made opponent ships emit pheromones based on the amount of halite they carry. My ships started to hunt juicy opponents instead of waiting for them to chance by.


#8

My bot seemed to do better in the middle-to-end part of the game, and particularly in 4-player games; it was around ranks 30-40 for some time. In the last few days I tried to solve my opening-phase issues, particularly the bot settling into local maxima when selecting tiles to harvest instead of ranging far enough to find more lucrative ones.

I changed the evaluation function from heuristics-based to something more mathematical. It turned out that tile selection can be done fairly well by calculating the average amount of halite per turn that can be harvested if a particular tile is chosen. This is a function of the amount of halite the ship can take, the distance to the tile, the distance to the nearest dropoff, and the amount of halite in the tile (i.e. how many turns it takes to harvest up to a certain amount from that tile). This improved the opening phase noticeably, but it added a new middle/end-game problem which hit 4-player games particularly hard (note that these used to be the exact strong areas of the existing version). The problem was too many ships going after the same tiles in the same direction, getting too affected by sudden appearances and depletions of halite on the map. A lot of fine-tuning and testing was needed.
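A minimal sketch of that halite-per-turn calculation (illustrative; it uses Halite III’s 25% extraction per turn and ignores move costs for simplicity):

```python
def halite_per_turn(cargo, tile_halite, dist_to_tile, dist_to_dropoff,
                    capacity=1000, extract=0.25):
    """Average halite banked per turn if the ship commits to this tile:
    travel there, mine until full (or the tile runs dry), then carry
    the load to the nearest dropoff."""
    space = capacity - cargo
    mined, mining_turns = 0.0, 0
    while mined < space and tile_halite >= 1:
        take = min(tile_halite * extract, space - mined)
        mined += take
        tile_halite -= take
        mining_turns += 1
    total_turns = dist_to_tile + mining_turns + dist_to_dropoff
    return mined / max(total_turns, 1)
```

Each candidate tile gets scored this way, and the ship heads for the best rate rather than following a hand-tuned heuristic.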

In hindsight, I could have just used both code versions together: use one for the opening-phase moves, and simply switch to the other after that stage. I made this easier fix out of interest soon after the competition submission, and now I’m kicking myself watching the newer bot beat both older versions (one of which is the one deservedly getting beaten right now in the competition). :slight_smile:

Overall, a great experience and a lot of fun taking part in this, and special thanks for the wonderful job done by the good folks at Two Sigma! Thanks to all the smart participants, it has been a pleasure!


#9

I did the exact same thing as Ender. I asked each ship “within a 20-square radius, which square could you collect from and return to base the fastest?”, then ordered the ships from fastest to slowest and moved them to their targets. To prevent them all going to the same spot, I deducted each ship’s missing halite from its target area and recalculated any affected ships.
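A minimal sketch of that assign-and-deduct loop (illustrative; a plain tile -> halite dict and a distance function stand in for the real map):

```python
def trip_turns(pos, tile, tile_halite, missing, dist, base=(0, 0)):
    """Turns to reach the tile, mine what the ship is missing (25% of
    the tile per turn), and return to base."""
    turns, mined = 0, 0.0
    while mined < missing and tile_halite >= 1:
        take = tile_halite * 0.25
        mined += take
        tile_halite -= take
        turns += 1
    return dist(pos, tile) + turns + dist(tile, base)

def assign_targets(ships, halite, dist):
    """ships: list of (pos, missing_halite); halite: {tile: amount}.
    Grant the fastest trips first; after each assignment, deduct the
    ship's missing halite from its target so slower ships re-evaluate
    against the already-claimed map."""
    remaining, orders, pending = dict(halite), [], list(ships)
    while pending:
        turns, pos, missing, tile = min(
            (trip_turns(p, t, remaining[t], m, dist), p, m, t)
            for p, m in pending for t in remaining)
        orders.append((pos, tile))
        remaining[tile] = max(0.0, remaining[tile] - missing)
        pending.remove((pos, missing))
    return orders
```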

I ran into the same issue as you, though: it performed terribly when the standard deviation dropped below a certain value. So I tested on a map where the bot played by itself, wrote a second collect logic specifically for low standard deviation, and switched to that logic when the deviation drops below a threshold.

I found the low-standard-deviation logic harvested ~83% of a 40x map. The “turns to return” logic harvested about ~86%. Combined, they harvested about 91%.


#10

Interesting reading teccles’ write-up in the post-mortem; his methodology is pretty much 99% the same as what everyone else was using. He must have been really efficient at debugging all the details and fixing the misplays. Halite 3 was won by the best coder rather than the best machine learner this time. Java is a good language for debugging.

Interesting that there wasn’t much chatter about the techniques that work in computer Go, like look-ahead/Monte Carlo tree search and modelling the probability of opponents’ moves.

I reached the 40s for a bit. I code C++ all day at my quant trading job, so I wanted to use JavaScript to stay fresh in the language, but JS, especially Node.js, does not excel at debugging or refactoring (Eclipse IDE’s Java refactoring is amazing!). JS also lacks native matrix calculations like Python’s; I think to solve this game AlphaGo-style, one would need to put all the logic, constraints, and probabilities into matrices.

This game has made me think more about how to put procedural if-then-else loop logic into data structures. The common ML techniques out there only work for really narrowly defined problems; does anyone know why the magical AlphaGo DNN + Q-learning algorithm hasn’t worked on Halite 3? Following the AI headlines, you’d think you could just turn on TensorFlow and the problem is solved?

Anyway, thanks indeed to Two Sigma for putting this game together; a great-looking, easy-to-use interface and support for running ‘virtually’ every popular programming language is an amazing accomplishment!