Download the semantic parser to $SEMANTIC_PARSER.

  mkdir daquar
  cd daquar
  wget http://datasets.d2.mpi-inf.mpg.de/mateusz14visual-turing/wholeset_human_894.zip
  unzip wholeset_human_894
  cd train

In the NIPS paper we use a kNN approximation, so that a separate semantic parser is trained for every test image. This is reflected in the 'train' folder, where each subfolder corresponds to a test image. For instance, 'train/image1010' corresponds to the test image 'image1010'. The content of every such subfolder must be replaced with the semantic parser.

Let's take image1010 as a running example.

  cd image1010
  cp -R $SEMANTIC_PARSER .
  cp ../../run_dcs_train.bash .
  bash run_dcs_train.bash

This script starts training the semantic parser. Repeat the same procedure for every image folder 'image*' (a loop that automates this is sketched at the end of this section).

Once the models are trained, copy the weights in 'train/image1010/state/execs/0.exec/params.x' into 'test/image1010/state/execs/trained_params'.

  cd test/image1010
  cp ../../run_dcs_test.bash .
  cp -R $SEMANTIC_PARSER .
  bash run_dcs_test.bash

This script runs the semantic parser and produces a log file 'test/image1010/state/execs/0.exec/log'. The answers must next be distilled from the log file and given to the WUPS score script (which can be downloaded from http://datasets.d2.mpi-inf.mpg.de/mateusz14visual-turing/calculate_wups.py).
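
Repeating the training step by hand for every 'image*' folder is tedious. A minimal loop sketch, assuming the layout above: it is started from the 'daquar/train' folder, $SEMANTIC_PARSER is set, and run_dcs_train.bash sits in the 'daquar' folder as in the single-image commands.

  # Sketch: train a separate semantic parser for every test image.
  # Run from 'daquar/train'; each iteration mirrors the image1010 example.
  for dir in image*; do
    (
      cd "$dir" || exit 1
      cp -R "$SEMANTIC_PARSER" .
      cp ../../run_dcs_train.bash .
      bash run_dcs_train.bash
    )
  done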
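
Copying the trained weights into the test folders can be scripted the same way. A sketch, assuming it is run from the 'daquar' folder; the weight file name 'params.x' is taken verbatim from the step above and may need adjusting to whatever file name your training run actually produced.

  # Sketch: copy each per-image model's weights into the matching test folder.
  # Run from 'daquar'. The mkdir is only a safety step in case the
  # destination directory does not already exist.
  for dir in train/image*; do
    img=$(basename "$dir")
    mkdir -p "test/$img/state/execs"
    cp "train/$img/state/execs/0.exec/params.x" \
       "test/$img/state/execs/trained_params"
  done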
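
The test step can be looped over in the same fashion. A sketch, assuming it is started from the 'daquar/test' folder with run_dcs_test.bash in the 'daquar' folder, mirroring the image1010 commands above.

  # Sketch: run the trained semantic parser for every test image.
  # Run from 'daquar/test' with $SEMANTIC_PARSER set; each iteration writes
  # its log to <image>/state/execs/0.exec/log as described above.
  for dir in image*; do
    (
      cd "$dir" || exit 1
      cp ../../run_dcs_test.bash .
      cp -R "$SEMANTIC_PARSER" .
      bash run_dcs_test.bash
    )
  done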
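
Once the answers have been distilled from the per-image log files, the WUPS script can be invoked. The invocation below is only a hedged sketch: 'ground_truth.txt' and 'predictions.txt' are placeholder file names for the distilled answers, and the argument order and threshold value are assumptions; check the usage notes inside calculate_wups.py before running it.

  # Hedged sketch: score the distilled answers with the WUPS script.
  # File names, argument order, and the 0.9 threshold are assumptions;
  # consult calculate_wups.py itself for the authoritative usage.
  wget http://datasets.d2.mpi-inf.mpg.de/mateusz14visual-turing/calculate_wups.py
  python calculate_wups.py ground_truth.txt predictions.txt 0.9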