
17 - Conclusions and Future Directions

from Part V - Conclusions

Published online by Cambridge University Press:  05 March 2012

Masashi Sugiyama, Tokyo Institute of Technology
Taiji Suzuki, University of Tokyo
Takafumi Kanamori, Nagoya University, Japan

Summary

In this book we described a new approach to machine learning based on density-ratio estimation. This density-ratio approach offers a novel research paradigm in the field of machine learning and data mining, spanning theory, algorithms, and applications.

In Part II, various methods for density-ratio estimation were described, including methods based on separate estimation of the numerator and denominator densities (Chapter 2), moment matching between numerator and denominator samples (Chapter 3), probabilistic classification of numerator and denominator samples (Chapter 4), density fitting between the numerator and denominator densities (Chapter 5), and direct fitting of a density-ratio model to the true density ratio (Chapter 6). We also gave a unified framework of density-ratio estimation in Chapter 7, which accommodates the methods described above and is substantially more general; as an example, a robust density-ratio estimator was derived from it. Finally, in Chapter 8, we described methods that combine density-ratio estimation with dimensionality reduction. Among the various density-ratio estimators, the unconstrained least-squares importance fitting (uLSIF) method described in Chapter 6 is arguably the most useful in practice because of its high computational efficiency (it admits an analytic-form solution), the availability of cross-validation for model selection, its wide applicability to various machine learning tasks (Part III), and its favorable convergence and numerical properties (Part IV).
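The analytic-form solution mentioned above is what makes uLSIF cheap: the regularized least-squares fit of a kernel density-ratio model reduces to solving one linear system. The following is a minimal sketch of that idea, not the book's reference implementation; the function name, the Gaussian-kernel basis centred on numerator samples, and the default values of the bandwidth `sigma` and regularizer `lam` are our assumptions for illustration (in practice both would be chosen by cross-validation).

```python
import numpy as np

def ulsif(x_nu, x_de, sigma=1.0, lam=0.1, n_basis=100):
    """Sketch of unconstrained least-squares importance fitting (uLSIF).

    Models the density ratio r(x) = p_nu(x) / p_de(x) as a linear
    combination of Gaussian kernels and obtains the coefficients in
    closed form by solving a regularized linear system.
    """
    rng = np.random.default_rng(0)
    idx = rng.choice(len(x_nu), min(n_basis, len(x_nu)), replace=False)
    centers = x_nu[idx]  # kernel centers taken from numerator samples

    def phi(x):
        # Gaussian kernel design matrix of shape (n_samples, n_centers)
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2 * sigma ** 2))

    Phi_nu, Phi_de = phi(x_nu), phi(x_de)
    H = Phi_de.T @ Phi_de / len(x_de)  # second moment over denominator
    h = Phi_nu.mean(axis=0)            # first moment over numerator
    # Analytic-form solution: theta = (H + lam * I)^{-1} h
    theta = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    return lambda x: phi(x) @ theta    # estimated ratio function
```

Because the solution is a single linear solve, leave-one-out cross-validation scores can also be computed analytically, which is part of why the method scales well for model selection.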

In Part III we covered the use of density-ratio estimators in various machine learning tasks, categorized into four groups. In Chapter 9 we described applications of density ratios to importance-sampling tasks such as non-stationarity/domain adaptation and multi-task learning.
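The common thread in those importance-sampling applications is that the estimated density ratio serves as a per-sample weight on the training loss, so that learning under the training distribution mimics learning under the test distribution. As a minimal sketch under our own naming (the function and its closed form are a standard weighted least-squares illustration, not the book's specific algorithm):

```python
import numpy as np

def importance_weighted_lsq(X, y, w):
    """Importance-weighted least squares.

    Minimizes sum_i w_i * (y_i - x_i^T beta)^2, where w_i would be the
    estimated density ratio p_test(x_i) / p_train(x_i).
    Closed form: beta = (X^T W X)^{-1} X^T W y.
    """
    Xw = X * w[:, None]                       # row-scale X by the weights
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)
```

With all weights equal to one this reduces to ordinary least squares; under covariate shift, ratio-derived weights shift the fit toward the region emphasized by the test distribution.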

Publisher: Cambridge University Press
Print publication year: 2012
