Towards machine understanding of human behavior and the nature of reward motivation – In this paper we address the problem of learning a set of rules for a distributed knowledge hierarchy (HMD). Given a distribution of knowledge, the agents must ensure that the hierarchy follows the rules of their distributed HMD. We propose a framework for learning rules that generalize well. The framework requires that the hierarchy contain not only a set of rules but also a set of actions that drive the hierarchy toward its goals. We show that the framework learns rules for the hierarchical HMD more effectively, and that the learned rule set improves the framework's generalization performance.
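The abstract does not spell out how rules are represented or selected, so the following is a minimal sketch under assumptions made purely for illustration: the hierarchy is a parent map, candidate rules are simple predicates over (node, parent) pairs, and "generalizing well" is approximated by keeping rules that hold on a random sample of the hierarchy. The names hierarchy, candidate_rules, and learn_rules are hypothetical and do not come from the paper.

```python
# Illustrative sketch only: the rule encoding, the toy hierarchy, and the
# greedy selection below are assumptions, not the paper's method.
import random

# A toy hierarchy: each node names its parent (None for the root).
hierarchy = {"root": None, "a": "root", "b": "root", "a1": "a", "a2": "a"}

# Candidate rules are predicates over (node, parent) pairs.
candidate_rules = [
    ("has_parent", lambda node, parent: parent is not None or node == "root"),
    ("parent_exists", lambda node, parent: parent is None or parent in hierarchy),
    ("no_self_loop", lambda node, parent: node != parent),
]

def rule_score(rule, sample):
    """Fraction of sampled (node, parent) pairs that satisfy the rule."""
    _, pred = rule
    return sum(pred(n, p) for n, p in sample) / len(sample)

def learn_rules(hierarchy, candidates, threshold=0.9, n_samples=100):
    """Greedily keep rules that hold on a random sample of the hierarchy,
    as a stand-in for 'rules that generalize well'."""
    pairs = list(hierarchy.items())
    sample = [random.choice(pairs) for _ in range(n_samples)]
    return [name for (name, pred) in candidates
            if rule_score((name, pred), sample) >= threshold]

print(learn_rules(hierarchy, candidate_rules))  # all three toy rules pass here
```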
Stochastic Optimization via Variational Nonconvexity
Stochastic Conditional Gradient for Graphical Models With Side Information
Robust Inference for High-dimensional Simple Linear Models via Convexity Enhancement – In this paper, we propose a method for automatically computing efficient linear models in high-dimensional settings, using a linear component function that measures the number of variables to which the model is connected (i.e., the model's latent dimension). In our method, each variable is represented as an integer matrix with a high-dimensional component function. The model is defined on each variable as a set of linear components in the high-dimensional space, and the component function is learned from the data. We demonstrate the method on a novel dataset from the UCF-101 Student Question Answering Competition.
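The abstract leaves the estimator unspecified, so the sketch below is one plausible reading under stated assumptions: a sparsity-inducing linear fit (here scikit-learn's Lasso, chosen for illustration and not named in the abstract), with the "component function" taken to be the number of variables the fitted model is actually connected to, i.e. its count of non-zero coefficients. The synthetic data and the component_function helper are hypothetical.

```python
# Illustrative sketch only: reads the "component function" as the number of
# variables the fitted linear model uses, with a Lasso fit as a stand-in estimator.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic high-dimensional data: 200 samples, 500 variables, only 10 relevant.
n, p, k = 200, 500, 10
X = rng.normal(size=(n, p))
true_coef = np.zeros(p)
true_coef[:k] = rng.normal(size=k)
y = X @ true_coef + 0.1 * rng.normal(size=n)

# Sparsity-inducing linear fit; the regularization strength is a free choice here.
model = Lasso(alpha=0.05).fit(X, y)

def component_function(coef, tol=1e-8):
    """Number of variables the model is connected to (its 'latent dimension'
    under this sketch's reading of the abstract)."""
    return int(np.sum(np.abs(coef) > tol))

print("estimated latent dimension:", component_function(model.coef_))
```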