Galaxietool Bug Tracker

View Issue Details
ID: 0000590
Project: Galaxietool
Category: (No Category)
View Status: public
Date Submitted: 2012-02-13 12:00
Last Update: 2013-06-17 00:05
Reporter: Omar Hawk
Assigned To: eX0du5
Priority: normal
Severity: minor
Reproducibility: always
Status: assigned
Resolution: open
Product Version: 5.0
Target Version: 5.0
Fixed in Version:
Summary: 0000590: Impossible to search for empty positions (colony search)
Description: Currently, you can't search for empty positions in database search, which is possible in the previous version (4.9.2).

I created this ticket so we do not forget to make this possible again in 5.0
Tags: No tags attached.
Browser and Browser Version:
Hoster:
MySQL Version:
PHP Version:
rev:
Attached Files:

Relationships

Notes
(0001438)
eX0du5 (manager)
2012-02-13 21:45

Technically this is no longer possible with the new DB storage.
Non-existing planets are no longer stored in the database, and a proper "where not exists" query has not come to my mind yet :-(
(0001615)
LordMike (reporter)
2013-01-30 14:16

I've made a makeshift solution. It involves creating a new table with some static data (all the possible planet locations in the universe), and then performing a join between that and the galaxy table.

The table has been created as such:
CREATE TABLE `possible_locations` (
  `galaxy` int(2) NOT NULL,
  `system` int(3) NOT NULL,
  `planet` int(2) NOT NULL,
  PRIMARY KEY (`galaxy`,`system`,`planet`)
)

I then insert all possible locations:
INSERT INTO possible_locations VALUES (0, 0, 1), (0, 0, 2), .....

(you can - of course - generate this at runtime, or even with .sql files as a change script ;))
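
For reference, a sketch of how the rows could be generated on the MySQL side with a stored procedure (requires MySQL 5.0+). The procedure name and the bounds of 9 galaxies, 499 systems and 15 planet slots are assumptions - adjust them to your universe:
DELIMITER //
CREATE PROCEDURE fill_possible_locations()
BEGIN
    -- Hypothetical one-off helper: walk every (galaxy, system, planet) combination
    DECLARE g INT DEFAULT 1;
    DECLARE s INT;
    DECLARE p INT;
    WHILE g <= 9 DO
        SET s = 1;
        WHILE s <= 499 DO
            SET p = 1;
            WHILE p <= 15 DO
                INSERT INTO possible_locations (galaxy, system, planet) VALUES (g, s, p);
                SET p = p + 1;
            END WHILE;
            SET s = s + 1;
        END WHILE;
        SET g = g + 1;
    END WHILE;
END //
DELIMITER ;

CALL fill_possible_locations();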

Once the data is there, I can do the following:
SELECT possible_locations.* FROM possible_locations
    LEFT OUTER JOIN galaxy ON galaxy.galaxy = possible_locations.galaxy AND galaxy.system = possible_locations.system AND galaxy.planet = possible_locations.planet
WHERE galaxy.galaxy IS NULL

This gives me each and every location that is free. Filtering can of course be applied.
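
For example (just a sketch), limiting the search to galaxy 1 and systems 1 to 50:
SELECT possible_locations.*
FROM possible_locations
    LEFT OUTER JOIN galaxy
        ON galaxy.galaxy = possible_locations.galaxy
        AND galaxy.system = possible_locations.system
        AND galaxy.planet = possible_locations.planet
WHERE galaxy.galaxy IS NULL
    AND possible_locations.galaxy = 1
    AND possible_locations.system BETWEEN 1 AND 50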

Notes:
* This can be optimized by introducing a view with the above query, and marking it as persistent (if that's even possible in MySQL)
* I've noticed OGame has properties such as "destroyed planet"? - this will have to be taken into account.

* A different strategy, to solve a caching / performance problem, would be to change the meaning of 'possible_locations' so that it is no longer static. Instead, it now contains all free planets. As before, we add all the basic data (75 000 rows), and then we delete all those that are not free.

We now have all the free locations. We then add triggers to the 'galaxy' table for INSERTs and DELETEs, which do the corresponding opposite action in the 'possible_locations' table: a DELETE in 'galaxy' INSERTs a row into 'possible_locations', and an INSERT in 'galaxy' DELETEs a row from 'possible_locations'.

(I personally like the last one best)
(0001616)
LordMike (reporter)
2013-01-30 14:29

To follow up on the last idea. The points are:

* Create table 'possible_locations' as above
* Insert base data into it
* Add the triggers below
* Delete the existing planets from it (a one time refresher):
DELETE FROM possible_locations WHERE
EXISTS(SELECT * FROM galaxy WHERE galaxy.galaxy = possible_locations.galaxy AND galaxy.system = possible_locations.system AND galaxy.planet = possible_locations.planet)

The two triggers I envision:

-- After DELETE on 'galaxy': the freed position becomes available again
-- (trigger names are just examples)
DELIMITER //
CREATE TRIGGER galaxy_after_delete AFTER DELETE ON galaxy
FOR EACH ROW
BEGIN
    INSERT INTO `possible_locations` SET
        galaxy = OLD.galaxy,
        system = OLD.system,
        planet = OLD.planet;
END //

-- After INSERT on 'galaxy': the position is now taken, remove it from the free list
CREATE TRIGGER galaxy_after_insert AFTER INSERT ON galaxy
FOR EACH ROW
BEGIN
    DELETE FROM `possible_locations` WHERE
        possible_locations.galaxy = NEW.galaxy AND
        possible_locations.system = NEW.system AND
        possible_locations.planet = NEW.planet;
END //
DELIMITER ;

The effects are:
You can now use 'possible_locations' as the list of free planets all the time, selecting from it directly. You might even add an index to each of the three columns separately if performance is an issue (as users often search for ranges of planet/system/galaxy locations).
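
A possible way to add those (index names are made up); since galaxy is already the leading column of the primary key, separate indexes on system and planet are the ones that help range searches on those columns:
-- Optional single-column indexes for range searches on system and planet
ALTER TABLE possible_locations
    ADD INDEX idx_system (system),
    ADD INDEX idx_planet (planet);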

Note:
There is still the issue of 'recently destroyed' planets. Maybe an UPDATE trigger?
Also, I haven't added error handling, so you should probably include ON DUPLICATE KEY handling for the INSERT in the after-DELETE trigger (and in a possible future UPDATE trigger).
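
A rough sketch of what such an UPDATE trigger could look like - the trigger name and the destroyed column are pure assumptions about the galaxy schema, and INSERT IGNORE stands in for proper duplicate handling:
DELIMITER //
CREATE TRIGGER galaxy_after_update AFTER UPDATE ON galaxy
FOR EACH ROW
BEGIN
    -- Hypothetical: when a planet row gets flagged as destroyed, free its slot again
    IF NEW.destroyed = 1 AND OLD.destroyed = 0 THEN
        INSERT IGNORE INTO possible_locations (galaxy, system, planet)
        VALUES (NEW.galaxy, NEW.system, NEW.planet);
    END IF;
END //
DELIMITER ;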
(0001617)
eX0du5 (manager)
2013-02-01 20:48

Thank you for sharing your ideas!

But we are looking for a solution without the need to introduce another table with 75k rows. We had this solution in the past and wanted to reduce the amount of data in our database to make all queries faster and to reduce the space needed for the Galaxytool itself.
We are aware of this quite simple and correct solution but are still looking for a solution which does not require such a large database footprint for "dummy data".
(0001621)
LordMike (reporter)
2013-02-01 22:31

What if we reduce the column sizes?
I can get it down to 3 bytes per row, totalling 225K per setup.

If it were possible to design a single query returning a series of numbers, one could do this without a table.
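
For illustration, a sketch of the same table with smaller integer types, assuming galaxy and planet always fit into one unsigned byte and system into two:
CREATE TABLE `possible_locations` (
  `galaxy` TINYINT UNSIGNED NOT NULL,
  `system` SMALLINT UNSIGNED NOT NULL,
  `planet` TINYINT UNSIGNED NOT NULL,
  PRIMARY KEY (`galaxy`,`system`,`planet`)
)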
(0001622)
LordMike (reporter)
2013-02-01 23:28
edited on: 2013-02-01 23:40

Maybe a table with numbers from 0 to 500 inclusive. This table can then be used in selects to generate the list of would-be planets.
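
For completeness, a sketch of how such a helper table could be created and filled (the name numbers500 matches the query below; the repeated self-inserts are just one way to avoid typing out 501 values):
CREATE TABLE numbers500 (
  `number` SMALLINT UNSIGNED NOT NULL,
  PRIMARY KEY (`number`)
);

-- Double the row count with each insert, then trim to the 0..500 range
INSERT INTO numbers500 VALUES (0), (1);
INSERT INTO numbers500 SELECT number + 2 FROM numbers500;    -- 0..3
INSERT INTO numbers500 SELECT number + 4 FROM numbers500;    -- 0..7
INSERT INTO numbers500 SELECT number + 8 FROM numbers500;    -- 0..15
INSERT INTO numbers500 SELECT number + 16 FROM numbers500;   -- 0..31
INSERT INTO numbers500 SELECT number + 32 FROM numbers500;   -- 0..63
INSERT INTO numbers500 SELECT number + 64 FROM numbers500;   -- 0..127
INSERT INTO numbers500 SELECT number + 128 FROM numbers500;  -- 0..255
INSERT INTO numbers500 SELECT number + 256 FROM numbers500;  -- 0..511
DELETE FROM numbers500 WHERE number > 500;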

I've generated an example query:
SELECT
    galaxies.number AS galaxy,
    systems.number AS system,
    planets.number AS planet
FROM
    numbers500 AS galaxies,
    numbers500 AS systems,
    numbers500 AS planets
WHERE
    galaxies.number BETWEEN 1 AND 10 AND
    systems.number BETWEEN 1 AND 499 AND
    planets.number BETWEEN 1 AND 15 AND
    NOT EXISTS (SELECT 1 FROM galaxy WHERE galaxy.galaxy = galaxies.number AND galaxy.system = systems.number AND galaxy.planet = planets.number)

It clocks in at just around 1 second (0.05s with query cache), and requires the numbers table with 500 numbers to select from.

(0001623)
LordMike (reporter)
2013-02-01 23:37
edited on: 2013-02-02 00:11

The above can be implemented as a view, which will allow optimizing the query.
Applying WHERE filters to queries against the view should still make it possible to utilize the MySQL query cache.
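
A sketch of what such a view could look like (the view name free_positions is made up; views need MySQL 5.0+, which ties into the compatibility concern raised further down):
CREATE VIEW free_positions AS
SELECT
    galaxies.number AS galaxy,
    systems.number AS system,
    planets.number AS planet
FROM
    numbers500 AS galaxies,
    numbers500 AS systems,
    numbers500 AS planets
WHERE
    galaxies.number BETWEEN 1 AND 10 AND
    systems.number BETWEEN 1 AND 499 AND
    planets.number BETWEEN 1 AND 15 AND
    NOT EXISTS (SELECT 1 FROM galaxy WHERE galaxy.galaxy = galaxies.number AND galaxy.system = systems.number AND galaxy.planet = planets.number);

-- Example usage: free slots in galaxy 1, systems 20 to 40
SELECT * FROM free_positions WHERE galaxy = 1 AND system BETWEEN 20 AND 40;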

EDIT:
Doing this, I get awesome performance :)
Using a VIEW I've been able to do random queries, with response times like 0.024s.

This method then uses (according to MySQL itself) 16KB (14 times less than 225K).

(0001624)
eX0du5 (manager)
2013-02-08 22:35

Please do not use the query cache for performance measurements, as you cannot be sure that the data is already in memory - especially in my hosting environment.
The query looks as if it would trigger 75k subqueries, which would be the reason for the 1 second response time.
I did not use views so far, to allow installation on existing 4.x MySQL machines. Unfortunately there are still some out there.
I remember the trouble I had when I introduced triggers or sub-queries (4.1).
I mean, the feature was a nice side effect of the bad database layout I had in the previous version. Now we are trying to emulate the functionality by simulating that old bad layout.
I did not get complaints about the missing feature so far, via email or in our chat. Some asked why it was gone, but that was it.
Currently I am considering delivering this functionality as a plugin for the Galaxytool which can be installed by anybody who requires it - but not on my hosting solution.
(0001651)
LordMike (reporter)
2013-06-04 00:59
edited on: 2013-06-04 01:26

Doing it as a plugin would be cool, so that users at least have a choice on the memory tradeoff.

Another option I considered recently was simply iterating over the planets in the PHP application. So when searching for colonizable positions 8-10 in galaxy 1, the following query could be run:

SELECT system, planet FROM galaxy WHERE galaxy = 1 AND planet BETWEEN 8 AND 10 ORDER BY system, planet

So now we have an iterator giving us all the colonized planets in our search range; what's left is to walk through them. In pseudocode:


var expected = {1, 8} // system, planet

for each (row in queryResult)
{
    // If the planet we got back is not the expected one, the expected position is colonizable.
    // Keep looping until we reach the planet we got back (which in turn isn't colonizable).
    while (row != expected)
    {
        // Colonizable: report 'expected' as a free position

        // Increment the expected counter.
        // Remember to wrap around. So for our query: 1,8 -> 1,9 -> 1,10 -> 2,8 ...
        expected = expected + 1
    }

    // Increment once more to move past the row we just matched, ready for the next SQL row
    expected = expected + 1

    if (expected.system >= 500)
    {
        // Exit the loop and the query
        break;
    }
}

// Any positions still left between 'expected' and the end of the search range
// (up to {499, 10} here) are also colonizable and must be reported as well.

(0001652)
xanas (reporter)
2013-06-09 01:20

Using an in-memory table, it's really fast, even after dropping the cache and restarting the server, and the memory footprint is less than 2MB.
Warning: data from MEMORY tables (not the tables themselves) gets deleted on server restart.

DROP TABLE IF EXISTS vgalaxy;
CREATE TABLE vgalaxy ENGINE=MEMORY AS SELECT DISTINCT galaxy from galaxy ;
DROP TABLE IF EXISTS vsystem;
CREATE TABLE vsystem ENGINE=MEMORY AS SELECT DISTINCT system from galaxy ;
DROP TABLE IF EXISTS vplanet;
CREATE TABLE vplanet ENGINE=MEMORY AS SELECT DISTINCT planet from galaxy ;

DROP TABLE IF EXISTS vPosible;
-- create and fill in one step; a separate INSERT of the same SELECT would duplicate every row
CREATE TABLE vPosible ENGINE=MEMORY SELECT galaxy,system,planet from vgalaxy CROSS JOIN vsystem CROSS JOIN vplanet ORDER BY galaxy,system,planet;

SELECT vPosible.* FROM vPosible LEFT OUTER JOIN galaxy ON galaxy.galaxy = vPosible.galaxy
 AND galaxy.system = vPosible.system AND galaxy.planet = vPosible.planet
 WHERE galaxy.planet IS NULL
 ORDER BY vPosible.galaxy, vPosible.system, vPosible.planet;

DELETE from vPosible where (galaxy,system,planet) IN (select galaxy,system,planet from galaxy);
(0001656)
eX0du5 (manager)
2013-06-16 23:50

Hehe, in-memory database table creation on the fly ;-)
But you should measure the time for everything: create, query, delete.
I cannot create them over and over, for example on my hosting server with around 1500 installations ;-) they would flood the server memory - for a scenario which is rarely used compared to other scenarios.
(0001657)
xanas (reporter)
2013-06-17 00:05

On a clean DB it's 1s in total; with dropping, still <1.5s.
As it's rarely used (but sometimes pretty useful), maybe make it manual, like imports from the API? (API importing is even more time consuming)

Issue History
Date Modified Username Field Change
2012-02-13 12:00 Omar Hawk New Issue
2012-02-13 12:00 Omar Hawk Status new => assigned
2012-02-13 12:00 Omar Hawk Assigned To => eX0du5
2012-02-13 21:45 eX0du5 Note Added: 0001438
2013-01-30 14:16 LordMike Note Added: 0001615
2013-01-30 14:29 LordMike Note Added: 0001616
2013-02-01 20:48 eX0du5 Note Added: 0001617
2013-02-01 22:31 LordMike Note Added: 0001621
2013-02-01 23:28 LordMike Note Added: 0001622
2013-02-01 23:37 LordMike Note Added: 0001623
2013-02-01 23:40 LordMike Note Edited: 0001622
2013-02-02 00:06 LordMike Note Edited: 0001623
2013-02-02 00:08 LordMike Note Edited: 0001623
2013-02-02 00:11 LordMike Note Edited: 0001623
2013-02-08 22:35 eX0du5 Note Added: 0001624
2013-06-04 00:59 LordMike Note Added: 0001651
2013-06-04 01:09 LordMike Note Edited: 0001651
2013-06-04 01:26 LordMike Note Edited: 0001651
2013-06-04 01:26 LordMike Note Edited: 0001651
2013-06-09 01:20 xanas Note Added: 0001652
2013-06-16 23:50 eX0du5 Note Added: 0001656
2013-06-17 00:05 xanas Note Added: 0001657


