svlandeg | dd691d0053 | debugging | 2019-05-17 17:44:11 +02:00
svlandeg | 400b19353d | simplify architecture and larger-scale test runs | 2019-05-17 01:51:18 +02:00
svlandeg | d51bffe63b | clean up code | 2019-05-16 18:36:15 +02:00
svlandeg | b5470f3d75 | various tests, architectures and experiments | 2019-05-16 18:25:34 +02:00
svlandeg | 9ffe5437ae | calculate gradient for entity encoding | 2019-05-15 02:23:08 +02:00
svlandeg | 2713abc651 | implement loss function using dot product and prob estimate per candidate cluster | 2019-05-14 22:55:56 +02:00
svlandeg | 09ed446b20 | different architecture / settings | 2019-05-14 08:37:52 +02:00
svlandeg | 4142e8dd1b | train and predict per article (saving time for doc encoding) | 2019-05-13 17:02:34 +02:00
svlandeg | 3b81b00954 | evaluating on dev set during training | 2019-05-13 14:26:04 +02:00
svlandeg | b6d788064a | some first experiments with different architectures and metrics | 2019-05-10 12:53:14 +02:00
svlandeg | 9d089c0410 | grouping clusters of instances per doc+mention | 2019-05-09 18:11:49 +02:00
svlandeg | c6ca8649d7 | first stab at model - not functional yet | 2019-05-09 17:23:19 +02:00
svlandeg | 9f33732b96 | using entity descriptions and article texts as input embedding vectors for training | 2019-05-07 16:03:42 +02:00
svlandeg | 7e348d7f7f | baseline evaluation using highest-freq candidate | 2019-05-06 15:13:50 +02:00
svlandeg | 6961215578 | refactor code to separate functionality into different files | 2019-05-06 10:56:56 +02:00