eb08bdb11f  svlandeg  2019-05-21 23:42:46 +02:00  hidden with for encoders
7b13e3d56f  svlandeg  2019-05-21 18:35:10 +02:00  undersampling negatives
2fa3fac851  svlandeg  2019-05-21 13:43:59 +02:00  fix concat bp and more efficient batch calls
0a15ee4541  svlandeg  2019-05-20 23:54:55 +02:00  fix in bp call
dd691d0053  svlandeg  2019-05-17 17:44:11 +02:00  debugging
400b19353d  svlandeg  2019-05-17 01:51:18 +02:00  simplify architecture and larger-scale test runs
b5470f3d75  svlandeg  2019-05-16 18:25:34 +02:00  various tests, architectures and experiments
9ffe5437ae  svlandeg  2019-05-15 02:23:08 +02:00  calculate gradient for entity encoding
2713abc651  svlandeg  2019-05-14 22:55:56 +02:00  implement loss function using dot product and prob estimate per candidate cluster
09ed446b20  svlandeg  2019-05-14 08:37:52 +02:00  different architecture / settings
4142e8dd1b  svlandeg  2019-05-13 17:02:34 +02:00  train and predict per article (saving time for doc encoding)
c6ca8649d7  svlandeg  2019-05-09 17:23:19 +02:00  first stab at model - not functional yet
9f33732b96  svlandeg  2019-05-07 16:03:42 +02:00  using entity descriptions and article texts as input embedding vectors for training
7e348d7f7f  svlandeg  2019-05-06 15:13:50 +02:00  baseline evaluation using highest-freq candidate
6961215578  svlandeg  2019-05-06 10:56:56 +02:00  refactor code to separate functionality into different files