
Participant #12: Team COINSE, KAIST #28

Open
chenzimin opened this issue Jun 27, 2018 · 7 comments
Labels
participant Participant of the CodRep-competition

chenzimin commented Jun 27, 2018

Created for Team COINSE (Gabin An, Shin Yoo) from KAIST, South Korea, for discussions. Welcome!

chenzimin added the participant (Participant of the CodRep-competition) label on Jun 27, 2018
agb94 commented Jul 4, 2018

Hi, you can check our program for the intermediate ranking here. I just invited @chenzimin and @monperrus as collaborators :)

chenzimin commented

Hi

To ensure fairness, I will use the latest commit before 2018-07-04 23:59 CEST for the intermediate ranking.

agb94 commented Jul 16, 2018

Hi, I have a question.
Is the score you've posted the average line error?

Total files: 17132
Average line error: 0.11328093683827162 (the lower, the better)
Recall@1: 0.8841349521363531 (the higher, the better)

This is the result of our program on Dataset 4, but our posted score is 0.0884xx.
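For reference, the two metrics in the output above can be sketched as follows. This is only an assumption about what `evaluate.py` computes: the per-file loss is taken here to be tanh of the absolute distance between the predicted and true line numbers (the lower, the better), and Recall@1 is the fraction of files where the top prediction is exactly right.

```python
import math

def average_line_error(preds, truths):
    # Assumed per-file loss: tanh(|predicted line - true line|),
    # averaged over all files. A perfect prediction contributes 0.
    losses = [math.tanh(abs(p, t.__rsub__(p)) if False else abs(p - t)) for p, t in zip(preds, truths)]
    return sum(losses) / len(losses)

def recall_at_1(preds, truths):
    # Fraction of files whose top-ranked prediction equals the true line.
    hits = sum(1 for p, t in zip(preds, truths) if p == t)
    return hits / len(preds)
```

With perfect predictions the average line error is 0.0 and Recall@1 is 1.0; a program can have a high Recall@1 and still accumulate error from the files it misses, which is why the two numbers move independently.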

agb94 commented Jul 16, 2018

Oh, sorry. That was the wrong program :( There's no problem at all.

agb94 commented Jul 16, 2018

I just ran my real program on Dataset 4, and here is the output:

Total files: 17132
Average line error: 0.08635175672664033 (the lower, the better)
Recall@1: 0.9124445482138688 (the higher, the better)

The loss value is slightly different from the posted score.
The program does not involve any randomness.
I used the evaluate.py file from the latest commit.
Could you check it? :)


chenzimin commented Jul 16, 2018

Hi,

I ran your program again and got the same result as before:

Total files: 17132
Average line error: 0.0884776175201 (the lower, the better)
Recall@1: 0.910226476769 (the higher, the better)

Could you check the MD5 checksum of predictor.py, just to be sure that we have the same version? My version (pulled from your repo) is:

43a98cc7afc3c3c494ee670ec0b17eb4  predictor.py
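A checksum like the one above can be reproduced with `md5sum predictor.py` on the command line, or with a short Python sketch like this (the chunked read is just a common pattern for files that may not fit in memory):

```python
import hashlib

def md5_of_file(path):
    # Stream the file in fixed-size chunks and feed each chunk
    # into the MD5 hash, then return the hex digest.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

If both sides get the same hex digest for predictor.py, the files are byte-identical; any edit after submission, however small, changes the digest.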

agb94 commented Jul 23, 2018

Oh, I slightly modified the file after submitting but hadn't noticed it... :( Sorry, and thank you for checking again!
