Jivko Sinapov
Assistant Professor, Tufts University
Verified email at - Homepage
Cited by
Curriculum learning for reinforcement learning domains: A framework and survey
S Narvekar, B Peng, M Leonetti, J Sinapov, ME Taylor, P Stone
Journal of Machine Learning Research 21 (181), 1-50, 2020
Glycosylation site prediction using ensembles of Support Vector Machine classifiers
C Caragea, J Sinapov, A Silvescu, D Dobbs, V Honavar
BMC bioinformatics 8 (1), 438, 2007
Vibrotactile Recognition and Categorization of Surfaces by a Humanoid Robot
J Sinapov, V Sukhoy, R Sahai, A Stoytchev
IEEE Transactions on Robotics 27 (3), 488-497, 2011
Source task creation for curriculum learning
S Narvekar, J Sinapov, M Leonetti, P Stone
Proceedings of the 2016 International Conference on Autonomous Agents …, 2016
BWIBots: A platform for bridging the gap between AI and human–robot interaction research
P Khandelwal, S Zhang, J Sinapov, M Leonetti, J Thomason, F Yang, ...
The International Journal of Robotics Research 36 (5-7), 635-659, 2017
Autonomous Task Sequencing for Customized Curriculum Design in Reinforcement Learning
S Narvekar, J Sinapov, P Stone
IJCAI, 2536-2542, 2017
Learning Multi-Modal Grounded Linguistic Semantics by Playing "I Spy"
J Thomason, J Sinapov, M Svetlik, P Stone, RJ Mooney
IJCAI, 3477-3483, 2016
Automatic curriculum graph generation for reinforcement learning agents
M Svetlik, M Leonetti, J Sinapov, R Shah, N Walker, P Stone
Proceedings of the AAAI conference on artificial intelligence 31 (1), 2017
Detecting the functional similarities between tools using a hierarchical representation of outcomes
J Sinapov, A Stoytchev
7th IEEE International Conference on Development and Learning (ICDL 2008), 2008
Grounding semantic categories in behavioral interactions: Experiments with 100 objects
J Sinapov, C Schenck, K Staley, V Sukhoy, A Stoytchev
Robotics and Autonomous Systems 62 (5), 632-645, 2014
Interactive object recognition using proprioceptive and auditory feedback
J Sinapov, T Bergquist, C Schenck, U Ohiri, S Griffith, A Stoytchev
The International Journal of Robotics Research 30 (10), 1250-1262, 2011
Improving grounded natural language understanding through human-robot dialog
J Thomason, A Padmakumar, J Sinapov, N Walker, Y Jiang, H Yedidsion, ...
2019 International Conference on Robotics and Automation (ICRA), 6934-6941, 2019
Interactive learning of the acoustic properties of household objects
J Sinapov, M Wiemer, A Stoytchev
2009 IEEE International Conference on Robotics and Automation (ICRA'09), 2009
Learning relational object categories using behavioral exploration and multimodal perception
J Sinapov, C Schenck, A Stoytchev
2014 IEEE international conference on robotics and automation (ICRA), 5691-5698, 2014
A behavior-grounded approach to forming object categories: Separating containers from non-containers
S Griffith, J Sinapov, V Sukhoy, A Stoytchev
IEEE Transactions on Autonomous Mental Development, 2011
Opportunistic active learning for grounding natural language descriptions
J Thomason, A Padmakumar, J Sinapov, J Hart, P Stone, RJ Mooney
Conference on Robot Learning, 67-76, 2017
Object category recognition by a humanoid robot using behavior-grounded relational learning
J Sinapov, A Stoytchev
2011 IEEE International Conference on Robotics and Automation (ICRA), 184-190, 2011
Jointly improving parsing and perception for natural language commands through human-robot dialog
J Thomason, A Padmakumar, J Sinapov, N Walker, Y Jiang, H Yedidsion, ...
Journal of Artificial Intelligence Research 67, 327-374, 2020
Learning and generalization of behavior-grounded tool affordances
J Sinapov, A Stoytchev
IEEE 6th International Conference on Development and Learning (ICDL 2007), 2007
Toward interactive learning of object categories by a robot: A case study with container and non-container objects
S Griffith, J Sinapov, M Miller, A Stoytchev
IEEE 8th International Conference on Development and Learning (ICDL 2009), 2009
Articles 1–20