You could build up an array of arrays (x and y), use integer positions (with some rule, like flooring the entity x and y values to get the array indices), and then turn it into one big feature vector. I remember doing this when writing sensor values into a learner for a self-driving car project.
The way you'd do this is to first initialize your arrays, then loop over every entity, floor its x and y, and place it into the correct array index, like
my_arr_map[x_integer][y_integer] = entity_value.
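Here's a minimal sketch of that loop in Python. The entity representation (dicts with "x", "y", and "value" keys) is my assumption for illustration; substitute whatever your game client gives you.

```python
import math

def build_grid(entities, width, height):
    # Initialize a height x width grid of zeros (0.0 = empty cell).
    grid = [[0.0] * width for _ in range(height)]
    for e in entities:
        # Floor the entity's float coordinates to integer array indices.
        x_integer = math.floor(e["x"])
        y_integer = math.floor(e["y"])
        grid[y_integer][x_integer] = e["value"]  # the entity_value from above
    return grid

def flatten(grid):
    # Turn the 2-D grid into one long feature vector.
    return [cell for row in grid for cell in row]
```

On a 240x160 map, `flatten(build_grid(entities, 240, 160))` gives you the 38,400-long feature vector discussed below.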
Note that even for the smallest map (240x160) you'll have 38,400 positions (i.e., a 38,400-long feature vector). You'll also need to scale all larger maps down to the same size, and choose how to encode each entity in the feature vector (shown as
entity_value above). On a screen you'd have different colored pixels, so how are you going to represent an enemy ship differently from a friendly ship, or from the different types of planet? And so on.
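One possible answer to the encoding question (my suggestion, not the only option) is to give each entity type its own channel, like the color planes of an RGB image, with a 1.0 marking an occupied cell. The type names here are hypothetical.

```python
# Hypothetical entity types; use whatever categories your game defines.
TYPES = ["friendly_ship", "enemy_ship", "planet"]

def build_channel_vector(entities, width, height):
    # One grid ("channel") per entity type, all initialized to zero.
    channels = {t: [[0.0] * width for _ in range(height)] for t in TYPES}
    for e in entities:
        x, y = int(e["x"]), int(e["y"])
        channels[e["type"]][y][x] = 1.0
    # Concatenate the flattened channels into one feature vector.
    vec = []
    for t in TYPES:
        vec.extend(cell for row in channels[t] for cell in row)
    return vec
```

This avoids inventing an arbitrary scalar code per type (where, say, planet = 3 would look "bigger" than ship = 1 to a learner), at the cost of multiplying the vector length by the number of types.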
You could also try scaling down further. I'm not sure about this, but you could try scaling things down so that one turn of movement at full thrust (max speed 7) is one array index away; that would give you roughly a 30x20 map. That might be way too small to learn on.
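A sketch of that coarse-grid variant, with the cell size as a parameter. Note I use a cell size of 8 rather than 7 in the example since 8 divides 240 and 160 evenly and yields exactly the 30x20 grid mentioned above; the clamping guards entities sitting right on the map edge.

```python
def build_coarse_grid(entities, width, height, cell_size=8):
    # Each grid cell covers a cell_size x cell_size patch of the map.
    w = width // cell_size
    h = height // cell_size
    grid = [[0.0] * w for _ in range(h)]
    for e in entities:
        # Integer-divide coordinates down to coarse indices, clamped
        # so edge entities don't index past the last cell.
        gx = min(int(e["x"]) // cell_size, w - 1)
        gy = min(int(e["y"]) // cell_size, h - 1)
        grid[gy][gx] = e["value"]
    return grid
```

One caveat this sketch makes visible: multiple entities in the same coarse cell overwrite each other, so at this resolution you'd probably want to sum or count per cell instead of assigning.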
Let me know how well this works if you try it. I didn't pursue it because I was thinking of higher-level features to extract from the game state to pass into ML algorithms. My worry (which you might prove wrong) is that there are too many "pixels" to learn to control the many ships from in a reasonable amount of training time, given that we only get win/loss at the end of the game, not a running score like in Mario or Pong.
All the best! Hope this helps!