Predicting the outcomes of games – In this paper, we develop a method that uses conditional independence (CaI) relations to model both the expected outcomes of games and their rewards. The CaI-based model achieves the highest expected-outcome accuracy among the variants we compare. It has several advantages. First, we demonstrate that expected game outcomes can be inferred from conditional independence relations alone. Second, the model is more robust to unknown game outcomes, which would otherwise require a more explicit causal structure. Finally, because the method relies only on conditional independence conditions, it lets us reason about game outcomes without committing to a full causal model. We show that this approach improves the inference of expected outcomes over baselines that ignore conditional independence structure.
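The abstract does not specify the model, but the core idea — reasoning about expected outcomes through an assumed conditional independence — can be illustrated with a minimal sketch. Everything below (the features `a` and `b`, the synthetic play records, the probabilities) is hypothetical, not taken from the paper: we assume the outcome is conditionally independent of feature B given feature A, so the expected outcome factorizes through A alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic play records (hypothetical): feature A drives the outcome;
# feature B depends on A but is conditionally independent of the outcome given A.
n = 5000
a = rng.integers(0, 2, n)                # assumed game feature A
b = (a + rng.integers(0, 2, n)) % 2      # correlated with A, not with outcome
outcome = (rng.random(n) < np.where(a == 1, 0.8, 0.3)).astype(int)

# Under the assumed CaI (outcome independent of B given A), the expected
# outcome needs only A:  E[outcome | A=a, B=b] = E[outcome | A=a].
def expected_outcome(a_val):
    mask = a == a_val
    return outcome[mask].mean()

print(round(expected_outcome(1), 2))  # close to the generating rate 0.8
print(round(expected_outcome(0), 2))  # close to the generating rate 0.3
```

The point of the factorization is data efficiency: the conditional table is estimated over A alone rather than over the joint (A, B), which is what makes the CaI assumption useful when many outcomes are unobserved.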
We propose a method for extracting features from visual images, a problem that has been well studied in both visual and natural language processing. Our method is based on a convolutional neural network (CNN) with discriminative feature descriptors, both of which are prerequisites for reliable and accurate visual segmentation. Previous work has focused on extracting features from video but not on human-level visual features. To address this, we use a fully convolutional network that learns features from a small number of labeled videos. The feature descriptor produced by this network serves as the input feature vector of the visual network, so the full descriptor can be inferred by comparing the discriminative feature distributions across videos. Experiments on three public benchmark datasets demonstrate the importance of the discriminative feature descriptors and the ability to infer a single visual segmentation, in contrast to most state-of-the-art supervised and human-level visual segmentation methods.
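The abstract leaves the architecture unspecified, so the following is only a minimal sketch of the general pipeline it names: convolutional filtering followed by pooling into a fixed-length discriminative feature descriptor per frame. The frame, the random filters, and the pooling choice are all illustrative assumptions, not the paper's network.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation on one channel, the basic CNN building block."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def descriptor(image, kernels):
    """Apply each filter, rectify (ReLU), and global-average-pool the response
    maps into one fixed-length feature descriptor for the frame."""
    maps = [np.maximum(conv2d(image, k), 0.0) for k in kernels]
    return np.array([m.mean() for m in maps])

rng = np.random.default_rng(1)
frame = rng.random((16, 16))                      # stand-in for one video frame
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]
d = descriptor(frame, kernels)
print(d.shape)  # one pooled response per filter: (4,)
```

Descriptors computed this way for frames of different videos can then be compared (e.g. by cosine distance) — the "comparing the discriminative feature distribution across videos" step the abstract refers to; in practice the filters would be learned from the labeled videos rather than drawn at random.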
Graph Classification: A Deep Neural Network Approach
A Bayesian Model for Data Completion and Relevance with Structured Variable Elimination
Predicting the outcomes of games
Graph learning via adaptive thresholding
Toward Distributed and Human-level Reinforcement Learning for Task-Sensitive Learning