Search Results (1 to 3 of 3 Results)
Results by journal:
- 2 Journal of Medical Internet Research
- 1 Interactive Journal of Medical Research

Because the AdaBoost model yielded the highest AUROC among the decision tree-based models tested, we selected it to identify the most important features for predicting ND. The importance of each feature was extracted from the AdaBoost model's feature importance attribute. We selected the 15 features with the largest impact on the model and plotted them on a bar graph to visualize their influence on the model's predictions.
J Med Internet Res 2024;26:e56922
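A minimal sketch of this step, assuming a scikit-learn AdaBoost classifier (the dataset, feature names, and hyperparameters below are illustrative placeholders, not the authors' pipeline):

```python
# Extract feature importances from a fitted AdaBoost model and plot the top 15.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# Placeholder data standing in for the study's clinical features
X, y = make_classification(n_samples=500, n_features=30, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X, y)

# feature_importances_ holds the importance of each input feature
importances = model.feature_importances_
top15 = np.argsort(importances)[::-1][:15]

plt.barh([feature_names[i] for i in top15][::-1], importances[top15][::-1])
plt.xlabel("Feature importance")
plt.title("Top 15 features (AdaBoost)")
plt.tight_layout()
plt.show()
```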

To predict the treatment outcome for SWL candidates, we used the AdaBoost algorithm, which is based on ensemble learning, a machine learning technique that combines several base classifiers to produce a more robust and accurate classification model. Compared with other conventional machine learning algorithms, ensemble learning techniques are more stable, faster, simpler, and easier to program [15-19].
Interact J Med Res 2022;11(1):e33357
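One way to realize the ensemble approach described above is an AdaBoost ensemble of shallow decision trees evaluated with AUROC; the sketch below uses scikit-learn (version 1.2 or later, where the base learner is passed via `estimator`), with placeholder data and hyperparameters that are assumptions rather than the paper's settings:

```python
# AdaBoost reweights training instances after each boosting round so that later
# trees focus on the examples earlier trees misclassified, then combines the
# trees into a single classifier by weighted vote.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, weights=[0.7, 0.3], random_state=42)

clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),  # weak base learner (decision stump)
    n_estimators=300,
    learning_rate=0.5,
    random_state=42,
)

# Cross-validated AUROC as the evaluation metric
auroc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Mean 5-fold AUROC: {auroc.mean():.3f}")
```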

We also include the median percent error [87], which is the percentage difference between the prediction f(x^(i)) and the label y^(i) for each instance {x^(i), y^(i)}, computed as:
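The excerpt truncates the formula itself; from the definition above it presumably takes a form like the following (a reconstruction, not the authors' exact notation, with the absolute values assumed):

```latex
\text{median percent error} \;=\; \operatorname{median}_{i}
  \left( \frac{\lvert f(x^{(i)}) - y^{(i)} \rvert}{\lvert y^{(i)} \rvert} \times 100\% \right)
```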
We observed that random forest regression had the lowest mean test error with the interpolation method (0.031), while adaptive boosting (AdaBoost) regression had the lowest mean test errors with the extrapolation and cross-validation methods (0.089 and 0.167, respectively) (see Table 5, Table 6, and Table 7).
J Med Internet Res 2021;23(4):e26628
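A comparison of this kind can be sketched as follows, again assuming scikit-learn; the data, error metric (mean absolute error), and cross-validation setup are placeholders rather than the evaluation protocol used in the article:

```python
# Compare the mean test error of random forest and AdaBoost regressors
# under simple 5-fold cross-validation.
from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=600, n_features=15, noise=10.0, random_state=1)

models = {
    "Random forest": RandomForestRegressor(n_estimators=200, random_state=1),
    "AdaBoost": AdaBoostRegressor(n_estimators=200, random_state=1),
}

for name, model in models.items():
    # cross_val_score returns negated MAE, so flip the sign to report an error
    mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
    print(f"{name}: mean test error (MAE) = {mae.mean():.3f}")
```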