# Our Expertise

AMSTAT Consulting uses machine learning, an artificial intelligence technology that gives systems the ability to learn without being explicitly programmed. Our clients cite these reasons for choosing to work with us:

• Everyone on our team is a PhD-trained scientist with experience deploying machine learning tools for many different problems and industries.
• Our principals hold PhDs in statistics from leading universities, including Harvard, Stanford, and Columbia.
• We use modern data science and machine learning tools that make predictive modeling, classification, segmentation, and natural language processing easier and more powerful than ever before.
• Our data wrangling expertise makes quick work of any dataset to deliver finished projects fast.
• Our experience enables us to pick the right tool for the job, whether it’s a deep convolutional neural network or a linear model.
• Our application development experience allows us to tightly integrate predictive models with your app, dashboard, reporting, API, and other components of your infrastructure.

## PhDs in Statistics from Leading Universities, Including Harvard, Stanford, and Columbia

All of our principals hold PhDs in statistics from leading universities, including Harvard, Stanford, and Columbia.

## Nationally Renowned Machine Learning Experts

Our team includes nationally renowned machine learning experts.

## Extensive Background in Machine Learning

They have extensive backgrounds in machine learning and statistics, with over 100 years of combined practical experience in quantitative methods.

## Deep Knowledge of Advanced Machine Learning Algorithms

We apply deep knowledge of advanced machine learning algorithms to every engagement.

Machine learning algorithms fall into four broad types, organized into a taxonomy by the desired outcome of the algorithm or the type of input available for training:

Supervised Learning

• Most machine learning is supervised learning.
• Supervised learning algorithms are “trained” using labeled examples where the desired output is known.
• It is called supervised learning because the process of an algorithm learning from the training dataset can be thought of as a teacher supervising the learning process.
• Because we know the correct answers, the algorithm iteratively makes predictions on the training data and is corrected by the teacher.
• Learning stops when the algorithm achieves an acceptable level of performance.
• Supervised learning is where you have input variables (x) and an output variable (Y) and you use an algorithm to learn the mapping function from the input to the output.
• The goal is to approximate the mapping function so well that when you have new input data (x), you can predict the output variables (Y) for that data.
• Supervised learning problems can be further grouped into regression and classification problems.
• Classification: A classification problem is when the output variable is a category, such as “red” or “blue” or “disease” and “no disease”.
• Regression: A regression problem is when the output variable is a real value, such as “dollars” or “weight”.
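
The two problem types can be sketched in a few lines of scikit-learn. The data and model choices below are invented for illustration; they are not a client project:

```python
# Toy sketch of the two supervised learning problem types.
# Data and model choices are illustrative only.
from sklearn.linear_model import LinearRegression, LogisticRegression

# Classification: the output is a category (0 = "no disease", 1 = "disease").
X_clf = [[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]]
y_clf = [0, 0, 0, 1, 1, 1]
clf = LogisticRegression().fit(X_clf, y_clf)
print(clf.predict([[2.5], [11.5]]))    # predicts a category for new inputs

# Regression: the output is a real value (e.g., dollars or weight).
X_reg = [[1.0], [2.0], [3.0], [4.0]]
y_reg = [10.0, 20.0, 30.0, 40.0]
reg = LinearRegression().fit(X_reg, y_reg)
print(reg.predict([[5.0]]))            # predicts a real value, close to 50
```

In both cases the algorithm learns the mapping from inputs (x) to outputs (Y) from labeled examples and then applies it to data it has not seen.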

We can:

• Determine the input feature representation of the learned function.
• The accuracy of the learned function depends strongly on how the input object is represented. Typically, the input object is transformed into a feature vector, which contains a number of features that are descriptive of the object. The number of features should not be too large, because of the curse of dimensionality; but should contain enough information to accurately predict the output.
• Determine the structure of the learned function and corresponding learning algorithm
• Complete the design
• Run the learning algorithm
• Some supervised learning algorithms require the user to determine certain control parameters. These parameters may be adjusted by optimizing performance on a subset (called a validation set) of the training set, or via cross-validation.
• Evaluate the accuracy of the learned function
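
The control-parameter step above can be sketched with cross-validation in scikit-learn. The dataset (iris) and the parameter grid are illustrative choices, not a client configuration:

```python
# Sketch of tuning a control parameter by cross-validation: each candidate
# number of neighbors is scored on held-out folds of the training set.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# 5-fold cross-validation scores each candidate parameter value.
search = GridSearchCV(KNeighborsClassifier(),
                      {"n_neighbors": [1, 3, 5, 7, 9]}, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```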

Unsupervised Learning

• A smaller share of machine learning work — often cited as 10 to 20 percent — is unsupervised learning.
• Unsupervised learning is a type of machine learning where the system operates on unlabeled examples. In this case, the system is not told the “right answer.”
• The algorithm tries to find a hidden structure or manifold in unlabeled data.
• Unsupervised learning is where you only have input data (X) and no corresponding output variables.
• The goal of unsupervised learning is to model the underlying structure or distribution in the data in order to learn more about the data.
• These are called unsupervised learning because, unlike supervised learning above, there are no correct answers and there is no teacher. Algorithms are left to their own devices to discover and present the interesting structure in the data.
• Unsupervised learning problems can be further grouped into clustering and association problems.
• Clustering: A clustering problem is where you want to discover the inherent groupings in the data, such as grouping customers by purchasing behavior.
• Association: An association rule learning problem is where you want to discover rules that describe large portions of your data, such as people that buy X also tend to buy Y.
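
The association idea can be illustrated with a library-free support/confidence calculation. The transactions below are invented purely for demonstration:

```python
# Minimal sketch of association rule confidence ("people who buy X also
# tend to buy Y"). Transaction data are invented for illustration.
transactions = [
    {"bread", "butter"}, {"bread", "butter", "milk"},
    {"bread", "milk"}, {"butter", "milk"}, {"bread", "butter"},
]

def confidence(antecedent, consequent):
    """Confidence of the rule 'buys antecedent -> also buys consequent'."""
    with_antecedent = [t for t in transactions if antecedent in t]
    return sum(1 for t in with_antecedent if consequent in t) / len(with_antecedent)

# 4 of the 5 baskets contain bread; 3 of those also contain butter.
print(confidence("bread", "butter"))   # 0.75
```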

Semi-supervised Learning

• Problems where you have a large amount of input data (X) and only some of the data is labeled (Y) are called semi-supervised learning problems.
• These problems sit in between both supervised and unsupervised learning.
• Methods
• Generative models
• Generative approaches to statistical learning first seek to estimate p(x|y), the distribution of data points belonging to each class.
• Generative models assume that the distributions take some particular form p(x|y, θ) parameterized by the vector θ. If these assumptions are incorrect, the unlabeled data may actually decrease the accuracy of the solution relative to what would have been obtained from labeled data alone. However, if the assumptions are correct, then the unlabeled data necessarily improve performance.
• Low-density separation
• It attempts to place boundaries in regions where there are few data points (labeled or unlabeled). One of the most commonly used algorithms is the transductive support vector machine, or TSVM (which, despite its name, may be used for inductive learning as well).
• Graph-based methods
• Graph-based methods for semi-supervised learning use a graph representation of the data
• Heuristic approaches
• Heuristic approaches are not intrinsically geared to learning from both unlabeled and labeled data; instead, they make use of unlabeled data within a supervised learning framework.
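
Self-training, one common heuristic approach, is available in scikit-learn: unlabeled points are marked with -1 and the wrapped classifier pseudo-labels them from its own confident predictions. The data and the base estimator choice below are ours, for illustration:

```python
# Sketch of semi-supervised self-training on invented 1-D data.
# Points labeled -1 are unlabeled; the base estimator choice is illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

X = np.array([[0.0], [0.3], [5.0], [5.3], [0.1], [5.1]])
y = np.array([0, 0, 1, 1, -1, -1])   # -1 marks unlabeled examples

model = SelfTrainingClassifier(KNeighborsClassifier(n_neighbors=1)).fit(X, y)
print(model.predict([[0.2], [5.2]]))  # pseudo-labeled points inform predictions
```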

Reinforcement Learning

AMSTAT Consulting uses reinforcement learning to discover for itself which actions yield the greatest rewards through trial and error. Reinforcement learning has three primary components:

• The agent – the learner or decision maker
• The environment – everything the agent interacts with
• Actions – what the agent can do

Algorithms for reinforcement learning include:

• Criterion of optimality
• Policy
• The agent’s action selection is modeled as a map called the policy.
• The policy map gives the probability of taking action “a” when in state “s.”
• State-value function
• Brute Force
• The brute force approach entails two steps:
• For each possible policy, sample returns while following it.
• Choose the policy with the largest expected return.
• Value Function
• Value function approaches attempt to find a policy that maximizes the return by maintaining a set of estimates of expected returns for some policy (usually either the “current” [on-policy] or the optimal [off-policy] one).
• These methods rely on the theory of MDPs, where optimality is defined in a sense that is stronger than the above one: A policy is called optimal if it achieves the best expected return from any initial state (i.e., initial distributions play no role in this definition). Again, an optimal policy can always be found among stationary policies.
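
The agent/environment/action loop and a value-function method can be sketched with tabular Q-learning. The corridor environment, rewards, and hyperparameters below are all invented for illustration:

```python
# Tiny Q-learning sketch: an agent on a 1-D corridor of states 0..4 learns
# by trial and error that moving right reaches the reward at state 4.
# Environment, rewards, and hyperparameters are invented for illustration.
import random

n_states, actions = 5, [-1, +1]          # actions: step left or step right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                      # episodes
    s = 0
    while s != n_states - 1:
        a = (random.choice(actions) if random.random() < epsilon
             else max(actions, key=lambda b: Q[(s, b)]))
        s2 = min(max(s + a, 0), n_states - 1)   # environment transition
        r = 1.0 if s2 == n_states - 1 else 0.0  # reward only at the goal
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

policy = [max(actions, key=lambda b: Q[(s, b)]) for s in range(n_states - 1)]
print(policy)   # greedy policy: +1 (move right) in every non-terminal state
```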

Generalization, Evaluation and Model Selection

We can:

• Use all types of machine learning to develop models that enable the learning machine to perform accurately on new, unseen examples or tasks
• Improve these models by using the machine
• Want the fit to be not too much, not too little, but just right
• Look at data of any complexity and size and build a model that scales well to that data
• Look at all the data or a subset to create an accurate model
• One of the more powerful machine learning algorithms is a random forest.  A random forest takes individual decision trees and combines them. When a new input is entered into the system, it runs down all of the trees. The result is either an average or a weighted average of all the terminal nodes that are reached.
• Validate a model to determine whether it can make effective predictions
• Use a training data set to develop the model
• Use known out-of-sample data to test it
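
The random forest and the validation workflow above can be sketched together. The dataset (scikit-learn's bundled breast cancer data) is our illustrative choice:

```python
# Sketch of the validation workflow: develop a random forest on a
# training split, then test it on known out-of-sample data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)              # develop the model on training data
accuracy = forest.score(X_test, y_test)   # evaluate on held-out data
print(round(accuracy, 3))
```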

Data Analytics

Today, the vast majority of enterprises have needs for descriptive analytics, which are necessary for effective management, but not sufficient to accelerate business performance. In order to scale to a higher level of responsiveness, enterprise organizations need to move beyond descriptive analytics and climb up the intelligence capability pyramid. We can use machine learning to help you with:

• Descriptive Analytics
• Diagnostic Analytics
• Predictive Analytics
• Predictive Modeling
• Automated Modeling
• Geospatial Analysis
• Text Analytics
• Social Network Analysis
• Entity Analytics
• Prescriptive Analytics
• Cognitive Analytics
• Operational Analytics
• Supply Chain Analytics
• Complexity Management
• End-to-End Optimization
• Supply Chain Risk Management

We can:

• Build a portfolio
• Deliver solutions

1. Build a Portfolio

We can demonstrate our ability to deliver by building a portfolio of completed machine learning projects.

Our Portfolio

We can:

• Pick a theme. This is the type of projects that we want to work on.
• Complete projects. We can apply our process to the dataset in order to deliver a result.
• Write up our findings. We can write up our findings.

2. Deliver Solutions

We can deliver solutions.

ML (Machine Learning) is at the heart of Data Science. It powers predictive technology. We can apply it to serve the following business objectives:

• Reduce user/customer attrition with churn prediction
• Acquire new customers through lead scoring and marketing campaigns optimization
• Cross-sell products with targeted campaigns and personalized recommendations
• Optimize products and pricing by finding patterns in commerce data
• Increase customer engagement by predicting their needs and interests
• Improve operations by predicting demand (or improve resource management by predicting usage)
• Save time by automating tasks
• Make your team more productive, with predictive enterprise apps

We emphasize the use of machine learning to create predictive models:

• Customer Satisfaction Prediction
• Drug Selection for Treating Heart Problems
• Predicting Financial Performance of a Company
• Forecast Profits for Clothing Sales

We can:

• Drive data scientist productivity.
We focus on speeding up analysis by using big data platforms such as Apache Spark, automating portions of the data science life cycle, and improving the usability of the data science workbench.
• Include multiple model deployment methods.
Production models must be embedded in applications and business processes to provide business value. AMSTAT Consulting can deploy models in multiple ways, including as code embedded directly into applications, exposed as a service callable by applications, or injected into other platforms such as databases. Some of the more mature PAML (predictive analytics and machine learning) vendors include or are integrated with decision management platforms that allow AD&D pros and business users to use a visual metaphor to express decision logic as a set of business rules that can also include models.
• Provide sophisticated model management.
The very nature of predictive models is that they may lose accuracy over time. More mature PAML solutions include features to monitor the ongoing efficacy of models in production by comparing model output with established key performance indicators and testing new models using a champion/challenger or A/B testing scheme.
• Allow polyglot programming.
AMSTAT Consulting uses more than one programming language because of open-source add-on libraries such as those on CRAN for R and scikit-learn for Python.
• Expand to Apache Spark.
Apache Spark is an open-source, primarily in-memory cluster computing platform that includes Spark ML, a set of machine learning libraries that data scientists are increasingly interested in using. In addition to Spark ML, other machine learning libraries such as H2O.ai’s Sparkling Water and IBM’s SystemML run on Spark.
• Build the foundation for AI and invest in deep learning.
Machine learning models are a key building block of AI applications. We use any of the PAML solutions to build models for use in AI applications. Deep learning is a branch of machine learning that we use to build models based on artificial neural networks. This method is particularly good at creating models for image recognition (including facial recognition), but it is applicable to more traditional use cases as well. We incorporate numerous open-source libraries, such as Caffe, MXNet, and TensorFlow, into PAML solutions; some vendors also build their own deep learning algorithms into the platform.

We can use Principal Component Analysis in countless machine learning applications:

• Fraud Detection
• Word and Character Recognition
• Speech Recognition
• Email Spam Detection
• Texture Classification
• Face Recognition

Principal Component Analysis converts a set of possibly correlated features into a set of linearly uncorrelated features called principal components.

We can reduce dimensionality to:

• Reduce the memory and disk space needed to store the data
• Reveal hidden, simplified structures
• Solve issues of multicollinearity
• Visualize higher-dimensional data
• Detect outliers
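
Dimensionality reduction with PCA can be sketched on synthetic data (generated purely for illustration), where two correlated features collapse onto one component:

```python
# PCA sketch: project correlated 3-D points onto two principal components.
# The synthetic data are generated purely for illustration.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = rng.normal(size=200)
X = np.column_stack([t,                                        # feature 1
                     2 * t + rng.normal(scale=0.1, size=200),  # correlated with 1
                     rng.normal(size=200)])                    # independent noise

pca = PCA(n_components=2)
X2 = pca.fit_transform(X)    # linearly uncorrelated principal components
print(X2.shape, round(float(pca.explained_variance_ratio_.sum()), 3))
```

Because two of the three features are nearly redundant, two components retain almost all of the variance, which is what makes the reduced data cheaper to store and easier to visualize.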

We can visualize data after dimension reduction:

• Which Medicare providers are similar to each other?
• Which Medicare providers are outliers?
• Further Exploratory Analysis
• External Knowledge for Deeper Understanding of The Groups

Clustering is unsupervised learning.

• No predefined classes
• No examples demonstrating how the data should be grouped

Clustering is a method of data exploration.

• A way of looking for patterns or structure in the data that are of interest
• As a stand-alone tool to get insight into data distribution
• As a processing step for other algorithms

Grouping

We can:

• Group customers based on what they do
• Group customers based on where they live
• Use multiple variables and do cluster analysis with a similarity/dissimilarity measure
• Cluster them based on their shopping behavior
• Discover distinct groups in their customer data sets, and then use this knowledge to develop targeted marketing programs (e.g., fresh food lovers, junk food lovers)

Major Clustering Approaches

• Partitioning algorithms
• We can construct various partitions and then evaluate them by some criterion
• Hierarchical algorithms
• We can create a hierarchical decomposition of the set of data using some criterion
• Hard clustering: Each observation belongs to exactly one cluster
• Soft clustering: An observation can belong to more than one cluster to a certain degree (e.g., likelihood of belonging to the cluster)

How to Choose a Clustering Algorithm

• Is the algorithm scalable?
• Does it handle different types of attributes?
• Do you have to specify the number of clusters?
• How much control do you have on the parameters and on the output?
• How does it handle noise and outliers?
• Is it sensitive to order of observations?
• Can it handle high dimensional data?
• Are the results interpretable?

K-means Clustering Summary

• Simple, understandable, efficient
• Items automatically assigned to clusters
• Can be used as a pre-clustering step
• Other clustering algorithms can be applied on smaller sub-spaces.
• Must pick the number of clusters, k
• All items are forced into a cluster
• Too sensitive to outliers and noise
• Does not work well with non-convex cluster shapes
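
A minimal k-means run on synthetic blobs (our toy data, not a client dataset) illustrates two caveats from the summary: we must pick k ourselves, and every point is forced into some cluster:

```python
# k-means sketch on synthetic 2-D blobs.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)   # toy data
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)   # k chosen by us
print(sorted(set(km.labels_)))   # every point is assigned to one of k=3 clusters
```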

Similarity vs Dissimilarity

• Depends on what we want to find or emphasize in the data
• Depends on the type of attributes in your data
• Measures the relationship between 2 observations
• Weighting the attributes might be necessary.
• Some of the clustering algorithms use distance matrices as input.

Similarity

• Cosine similarity
• Inverses of distance measures

Dissimilarity

• Euclidean distance
• Manhattan distance
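
These measures can be computed directly for a pair of observations. The vectors below are invented for illustration:

```python
# The similarity and dissimilarity measures above, for two observations.
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])    # a scaled copy of a

cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))   # similarity
euclidean = np.linalg.norm(a - b)                          # dissimilarity
manhattan = np.abs(a - b).sum()                            # dissimilarity

# Same direction, different magnitude: cosine similarity is exactly 1.0
# even though both distance measures report the vectors as far apart.
print(round(float(cosine), 3), round(float(euclidean), 3), float(manhattan))
```

This is why the choice of measure matters: cosine similarity emphasizes orientation (useful for text), while Euclidean and Manhattan distance emphasize magnitude.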

Internal vs External

We can determine which evaluation criterion your clustering needs.

Internal criterion

• Good clustering will produce high-quality clusters in which:
• The intra-cluster similarity is high.
• The inter-cluster similarity is low.

External criterion

• Quality measured by its ability to discover some or all of the hidden patterns or latent classes in gold standard data
• Assess a clustering with respect to ground truth

Estimating K: Reference Distribution

We can use the following methods to compare a clustering solution in the training data to a clustering solution in a reference distribution:

• Aligned box criterion (ABC)
• Gap statistic
• Cubic clustering criterion (CCC)
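
A simplified sketch of the reference-distribution idea behind the gap statistic (not the full ABC or CCC procedures): compare the log within-cluster dispersion on the data to its average over uniform reference datasets, for each candidate k. The data, number of references, and k grid below are illustrative:

```python
# Simplified gap-statistic sketch on synthetic data with 3 true clusters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

rng = np.random.default_rng(0)
X, _ = make_blobs(n_samples=200, centers=3, random_state=0)   # toy data

def log_dispersion(data, k):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(data)
    return np.log(km.inertia_)   # log within-cluster sum of squares

def gap(k, n_refs=5):
    # Uniform reference datasets over the bounding box of the data.
    refs = [rng.uniform(X.min(axis=0), X.max(axis=0), size=X.shape)
            for _ in range(n_refs)]
    return np.mean([log_dispersion(r, k) for r in refs]) - log_dispersion(X, k)

gaps = {k: gap(k) for k in range(1, 6)}
print(max(gaps, key=gaps.get))   # the k where the gap is largest
```

The full published procedure also uses a standard-error correction when picking k; this sketch keeps only the core comparison against the reference distribution.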

When to Use Clustering

• Segmentation
• Customer, product, store
• Anomaly detection
• Outliers typically belong to clusters with 1 observation.
• Identify fraud transactions
• Prepare for other techniques
• Summarize documents: form clusters and use their centroids
• Predictive modeling on segments
• Logistic regression results can be improved by fitting the model separately on smaller clusters
• Missing value imputation
• Decrease dependence between attributes
• Pre-processing step

MANUFACTURING

• Estimating warranty reserves
• Forecasting demand
• Optimizing processes and predicting maintenance needs
• Orchestrating telematics

RETAIL

• Providing predictive inventory planning
• Driving recommendation engines, upsell, and cross-sell opportunities
• Automating intelligent market segmentation and targeting

HEALTHCARE AND LIFE SCIENCES

• Providing real-time alerts and patient diagnostics
• Identifying diseases and risk stratification
• Optimizing patient triage
• Driving proactive health management
• Analyzing healthcare

ENERGY, UTILITIES, & FEEDSTOCK

• Analyzing power usage
• Processing seismic data
• Optimizing energy demand and supply
• Automating intelligent grid management
• Recommending customer pricing

FINANCIAL SERVICES

• Providing risk analysis and regulation
• Evaluating credit
• Segmenting customers
• Recommending cross-sell & upsell opportunities
• Automating sales & marketing campaigns

TRAVEL & HOSPITALITY

• Analyzing traffic patterns and congestion management
• Scheduling aircraft
• Creating dynamic prices
• Automating social feedback & interaction

Dr. Zamir S. Brelvi MD, PhD., CEO & Co-Founder, EndoLogic

“We have been very pleased with working with AMSTAT Consulting Analytics Group. The service was custom tailored and on time completion. The statistical report was detailed with excellent graphics. The cost of the services was affordable for a start-up company such as EndoLogic! Dr. Ann is very detail oriented and likes to know the project thoroughly that is being analyzed.”

Dr. Raj Singhal, MD., Director, Pediatric Anesthesiology, Phoenix Children’s Hospital

“Dr. Ann has been instrumental in helping with our statistical needs. In addition to her professionalism, she has been prompt and thorough with all of our requests. Dr. Ann’s work is impeccable, and I would recommend her services to anyone in need of assistance with statistical methods or interpretation. We plan on using Dr. Ann for all of our future needs, and I am thrilled to have been introduced to her.”

Dr. Haritha Boppana, MD, DHA, GHS Greenville Memorial Hospital

“I am a physician and was in need of statistical analysis of research data. I found AMSTAT Consulting Analytics Group on online search. Dr. Ann called me and explained the process involved in data analysis. Dr. Ann was always very prompt, helpful, intelligent and took time explaining the various tests used in conducting data analysis. Thank you so much!! I look forward to working with you in the future.”

Dr. Vincent Salyers, Dean, Faculty of Nursing, MacEwan University

“I have worked closely with AMSTAT Consulting Analytics Group on the data analysis/results of two research projects so feel as though I am knowledgeable about their expertise. On all accounts, the company provided me with reliable statistical analysis and results that I could translate into publishable format. They are conscientious experts who provide keen insights into appropriate data analysis given various data sets. I highly recommend them for your data support needs!”

Dr. Nancy Allen, Curriculum and Technology Consultant

“My project required the analysis of a complex survey that required a great deal of help in organizing the data and analyses. In addition, the project required a quick turn-around. AMSTAT Consulting Analytics Group asked all the right questions, made realistic and helpful suggestions, and completed the project in a timely manner. They were professional and helpful throughout the process. I highly recommend them.”

Dr. David Fetterman, Advisory Board (Fetterman & Associates, President)

EDUCATION

Ph.D., Stanford University
Master’s Degree, Stanford University
Master’s Degree, Stanford University

EXPERIENCE

Stanford University, Professor
School of Medicine, Stanford University, Director of Evaluation

HONORS (selected)

American Educational Research Association Research on Evaluation Distinguished Scholar Award, 2013
American Evaluation Association Advocacy and Use Evaluation Award, 2014
Lazarsfeld Award for Contributions to Evaluation Theory, American Evaluation Association, 2000
Mensa Education and Research Foundation Award for Excellence, 1990.
Myrdal Award for Cumulative Contributions to Evaluation Practice, American Evaluation Association, 1995
Outstanding Higher Education Professional, Neag School of Education, University of Connecticut, 2008
Who’s Who in America, 1990, 1995-1996, 1999, 2008-2012
Who’s Who in American Education, 1989-90, 1995-96, 2003
Who’s Who in Science and Engineering, 2010, 2011
Who’s Who in the World, 2011, 2012, 2013

PROJECTS (selected)

$15 Million Digital Divide Project, Hewlett-Packard Philanthropy and Education
W. K. Kellogg Foundation
Tobacco Prevention, Minority Initiative Sub-recipient Grant Office, University of Arkansas at Pine Bluff
Tsholofelo Community, South Africa
Corte Madera, Portola Valley School District, CA
Family and Children Services, Palo Alto, CA
Ministry of Health and Jimma University, Ethiopia
BUILD, Palo Alto
Case Method, Columbia School of Journalism
Digital Media Center, Knight Foundation
Knight New Media Center, Knight Foundation
Te Puni Kokiri, Ministry of Maori Development, New Zealand
National Institute of Multimedia Education, Japan
Knight Foundation, Western Knight Center for Specialized Journalism
Mosaic’s Project, California State University
Arkansas Department of Education
One East Palo Alto, City Revitalization Project, Hewlett Foundation
National Indian Child Welfare Association, Intertribal Council of Michigan, Hannahville Indian Community
Independent Development Trust, Cape Town, South Africa
California Arts Council

BOOKS (selected)

Fetterman, D.M. (2013). Empowerment Evaluation in the Digital Villages: Hewlett-Packard’s $15 Million Race Toward Social Justice. Stanford: Stanford University Press. (See Stanford Social Innovations site: http://www.ssireview.org/articles/entry/empowerment_evaluation_in_the_digital_villages_hewlett_packards_15_million)
Fetterman, D.M., Kaftarian, S., and Wandersman, A. (2014) (eds.) Empowerment Evaluation: Knowledge and Tools for Self-assessment, Evaluation Capacity Building, and Accountability. Thousand Oaks, CA: Sage.
Fetterman, D.M., Rodriguez-Campos, L., and Zukowski, A. (in press). Collaborative, Participatory, and Empowerment Evaluation: Stakeholder Involvement Approaches to Evaluation. New York: Guilford Publications.
Fetterman, D.M., Kaftarian, S., and Wandersman, A. (2015). Empowerment Evaluation: Knowledge and Tools for Self-assessment, Evaluation Capacity Building, and Accountability. Thousand Oaks, CA: Sage.
Fetterman, D.M. and Wandersman, A. (2005). Empowerment Evaluation Principles in Practice. New York: Guilford Publications. (Preview.)
Fetterman, D.M. (2001). Foundations of Empowerment Evaluation. Thousand Oaks, CA: Sage. (Preview.)
Fetterman, D.M., Kaftarian, S., Wandersman, A. (Eds.) (1996). Empowerment Evaluation: Knowledge and Tools for Self-assessment and Accountability. Newbury Park, CA: Sage. (Preview.)
Fetterman, D.M. (Ed.) (1993). Speaking the Language of Power: Communication, Collaboration, and Advocacy. London, England: Falmer Press. (Preview.)

CHAPTERS AND ARTICLES (selected – over 100)

Fetterman, D.M. (in press). Empowerment Evaluation. The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation. Thousand Oaks, CA: Sage.
Fetterman, D.M. and Ravitz, J. (in press). Evaluation Capacity Building. The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation. Thousand Oaks, CA: Sage.
Fetterman, D.M. (in press). Empowerment Evaluation: Linking Theories, Principles, and Concepts to Practical Steps. In Secolsky, C. and Denison, D.B. (eds.) Handbook on Measurement, Assessment, and Evaluation in Higher Education (2nd edition). New York: Routledge.
Fetterman, D.M. (2015). Empowerment Evaluation. International Encyclopedia of the Social and Behavioral Sciences, 2nd edition.
Mansh, M., White, W., Gee-Tong, L., Lunn, M., Obedin-Maliver, J., Stewart, L., Goldsmith, E., Brenman, S., Tran, E., Wells, M., Fetterman, D.M., Garcia, G. (2015). Sexual and Gender Minority Identity Disclosure During Undergraduate Medical Education: “In the Closet” in Medical School. Academic Medicine, 90(5):634-644.
Wang JY, Lin H, Lewis PY, Fetterman DM, Gesundheit N. (2015). Is a career in medicine the right choice? The impact of a physician shadowing program on undergraduate premedical students. Acad Med. May, 90(5):629-33. doi: 10.1097/ACM.0000000000000615
White, W., Brenman, S., Paradis, E., Goldsmith, E.S., Lunn, M.R., Obedin-Maliver, J., Stewart, L., Tran, E., Wells, M., Chamberlain, L.J., Fetterman, D.M., and Garcia, G. (2015). Lesbian, Gay, Bisexual, and Transgender Patient Care: Medical Students’ Preparedness and Comfort. Teaching and Learning in Medicine: An International Journal. Volume, 27, Issue 3: 254-263
Obedin-Maliver, J., Goldsmith, E.S., Stewart, L., White, W., Tran, E., Brenman, S., Wells, M., Fetterman, D.M., Garcia, G., Lunn, M.R. (2011). Lesbian, Gay, Bisexual, and Transgender-Related Content in Undergraduate Medical Education. JAMA, 306(9):971-977.
Fetterman, D.M., Kaftarian, S., and Wandersman, A. (2015). Empowerment evaluation is a systematic way of thinking: A response to Michael Patton Empowerment evaluation: Knowledge and tools for self-assessment, evaluation capacity building, and accountability. Evaluation and Program Planning 52 (2015) 10–14
Fetterman, D.M. (2011). Empowerment Evaluation and Accreditation Case Examples: California Institute of Integral Studies and Stanford University. In Secolsky, C. and Denison, D.B. (eds.) Handbook on Measurement, Assessment, and Evaluation in Higher Education. New York: Routledge.
Fetterman, D.M., Deitz, J., and Gesundheit, N. (2010). Empowerment evaluation: A collaborative approach to evaluating and transforming a medical school curriculum. Academic Medicine, 85(5):813-820.
Fetterman, D.M. (2009). Empowerment evaluation at the Stanford University School of Medicine: Using a Critical Friend to Improve the Clerkship Experience. Ensaio: Avaliação e Políticas Públicas em Educação. Rio de Janeiro, 17(63):197-204.
Fetterman, D.M. (2004). Empowerment Evaluation’s Technological Tools of the Trade. Harvard Family Research Project. The Evaluation Exchange, X 3, p. 8-9.
Fetterman, D.M. (2003). Empowerment Evaluation Strikes a Responsive Chord. In S. Donaldson & Scriven, M. (Eds.) Evaluating social programs and problems: Visions for the new millennium. Hillsdale, NJ: Erlbaum.
Fetterman, D.M. and Bowman, C. (2001). Experiential Education and Empowerment Evaluation: Mars Rover Educational Program Case Example. Journal of Experiential Education.
Fetterman, D.M. (2002). Web surveys to Digital Movies: Technological Tools of the Trade. Educational Researcher, 31(6):29-37 or http://aera.net
Fetterman, D.M. (1998). Teaching in the Virtual Classroom at Stanford University. The Technology Source.
Fetterman, D.M. (1998). Webs of Meaning: Computer and Internet Resources for Educational Research and Instruction. Educational Researcher, 27(3):22-30.
Fetterman, D.M. (1998). Learning with and about technology: A middle school nature area. Meridian, 1(1)
Fetterman, D.M. (1996). Empowerment Evaluation: An Introduction to Theory and Practice. In Fetterman, D.M., Kaftarian, S., and Wandersman, A. (eds.) Empowerment Evaluation: Knowledge and Tools for Self-Assessment and Accountability. Newbury Park, CA: Sage.
Fetterman, D.M. (1996). Conclusion: Reflections on Emergent Themes and Next Steps. In Fetterman, D.M., Kaftarian, S., and Wandersman, A. (eds.) Empowerment Evaluation: Knowledge and Tools for Self-Assessment and Accountability. Newbury Park, CA: Sage.
Fetterman, D.M. (1996). Videoconferencing On-Line: Enhancing Communication Over the Internet. Educational Researcher, 25(4)
Fetterman, D.M. (1995). In Response to Dr. Daniel Stufflebeam’s: “Empowerment Evaluation, Objectivist Evaluation, and Evaluation Standards: Where the Future of Evaluation Should Not Go and Where It Needs to Go,” Evaluation Practice, June 1995, 16(2):179-199.
Fetterman, D.M. (1994). Gifted and Talented Education Program Evaluation. In Sternberg, R.J. (ed.) Encyclopedia of Human Intelligence. New York, NY: Macmillan Publishing Company.
Fetterman, D.M. (1994). The Terman Study. In Sternberg, R.J. (ed.) Encyclopedia of Human Intelligence. New York, NY: Macmillan Publishing Company.
Fetterman, D.M. (1994). Keeping Research on Track. New Directions for Program Evaluation. No. 63, Fall. San Francisco, CA: Jossey-Bass, pp. 103-105.
Fetterman, D.M. (1994). Empowerment Evaluation. Presidential Address. Evaluation Practice, 15(1):1-15.
Fetterman, D.M. (1992). Hevrah: Our Intellectual Community. Anthropology and Education Quarterly, 23(4):271-274.
Fetterman, D.M. (1992) Evaluate Yourself. Storrs, CT: National Research Center on the Gifted and Talented.
Fetterman, D.M. (1991). Evaluation in Multi-Site and Multi-Focus Projects. Revitalizing Rural America: New Strategies for the Nineties. Georgia Center for Continuing Education. Athens, GA: The University of Georgia.
Fetterman, D.M. (1990). Health and Safety Issues: Colleges Must Take Steps to Avert Serious Problems. The Chronicle of Higher Education, March 21, A48.
Fetterman, D.M. (1989). Anthropology Can Make a Difference. In Trueba, H., G. Spindler, and Spindler, L. (Eds.) What Do Anthropologists Have to Say About Dropouts? New York, NY: Falmer Press, 1989.
Fetterman, D.M. (1988). Stanford Special Review on Health and Safety Phase II: A Report on Allegations. Internal Audit Department. Stanford, CA: Stanford University.
Fetterman, D.M. (1988). Gifted and Talented Education. In Gorton, R.A., Schneider, G.T., and Fisher, J.C. (Eds.) Encyclopedia of School Administration and Supervision. Phoenix, AZ: Oryx Press.
Fetterman, D.M. (1986). Operational Auditing in a Teaching Hospital: A Cultural Approach, Internal Auditor, 43(2):48-54.
Fetterman, D.M. (1986). Evaluating Organizational Culture in a Teaching Hospital: The Use of Cultural Concepts and Techniques. In K. Sedgwick (Ed.), Association of College and University Auditors. Logan, Utah: Utah State University.
Fetterman, D.M. (1982). Ibsen’s Baths: Reactivity and Insensitivity (A misapplication of the treatment-control design in a national evaluation). Educational Evaluation and Policy Analysis, 4(3):261-279.
Fetterman, D.M. (1981). Protocol and Publication: Ethical Obligations. Anthropology and Education Quarterly, 7(1):82-83.

MEDIA INTERVIEWS (selected)

Empowerment Evaluation in the Digital Villages (book), KAZI FM, Houston, Texas, March 29, 2013.
Chronicle of Philanthropy article about evaluation and nonprofit survival (Chronicle), WPFM FM, Washington, D.C., March 25, 2013.
Empowerment Evaluation in the Digital Villages (book), Kathryn Zox Show, March 13, 2013.
Empowerment Evaluation in the Digital Villages (book), Money Matters Network, Host Stu Taylor, January 28, 2013.
Empowerment Evaluation in the Digital Villages (book), WKXL-AM, Concord, New Hampshire, Host Bill Kearney, January 17, 2013.
Empowerment Evaluation in the Digital Villages (book), WPHM-AM, Detroit, Host Paul Miller, January 14, 2013.
Empowerment Evaluation in the Digital Villages (book), Business Matters Radio, Host Thomas White, January 14, 2013.

BLOGS (selected)

Fetterman, D.M. (2014). David Fetterman on Google Glass Part I: Redefining Communications. AEA365. American Evaluation Association. http://aea365.org/blog/david-fetterman-on-google-glass-part-i-redefining-communications/ (April 17).
Fetterman, D.M. (2014). David Fetterman on Google Glass Part II: Using Glass as an Evaluation Tool. AEA365. American Evaluation Association. http://aea365.org/blog/david-fetterman-on-google-glass-part-ii-using-glass-as-an-evaluation-tool/ (April 18).
Fetterman, D.M. (2013). In These Uncertain Times, Charities Need a Survival Plan. The Chronicle of Philanthropy. http://philanthropy.com/article/In-These-Uncertain-Times/137741/ (March 10).
Fetterman, D.M. (2013). Surviving the Fiscal Cliff: The One Thing Every Nonprofit Should Do in the Face of Federal Tax Increases and Spending Cuts. Stanford Social Innovation Review. http://www.ssireview.org/blog/entry/surviving_the_fiscal_cliff (January).
Fetterman, D.M. (2012). Empowerment Evaluation in the Digital Villages. Stanford Social Innovation Review. http://www.ssireview.org/articles/entry/empowerment_evaluation_in_the_digital_villages_hewlett_packards_15_million (December).
Fetterman, D.M. (2012). Corporate Philanthropy Tackles the Digital Divide. Stanford Social Innovation Review. http://www.ssireview.org/blog/entry/corporate_philanthropy_tackles_the_digital_divide (November).

ENCYCLOPEDIA ENTRIES (selected): The International Encyclopedia of Education and Encyclopedia of Human Intelligence

Dr. Ann E.K. Um, President and CEO

EDUCATION

Doctorate, Columbia University
Master’s Degree, Stanford University
Master’s Degree, Columbia University

EXPERIENCE

Harvard Medical School, Data Science Manager
Harvard Medical School, Brigham and Women’s Hospital, Data Science Manager
The University of Texas, Assistant Professor

PUBLICATIONS (selected)

Autonomy Support, Self-Concept, and Mathematics Performance: A Structural Equation Analysis. Saarbrucken, Germany: VDM Verlag, 2010.
Motivation and Mathematics Achievement: A Structural Equation Analysis. Saarbrucken, Germany: VDM Verlag, 2008.
Motivation and Mathematics Performance: A Structural Equation Analysis. ProQuest, 2006.
Motivation and Mathematics Performance: A Structural Equation Analysis (doctoral dissertation). Columbia University, New York, 2005.

PRESENTATIONS (selected)

Motivation and Mathematics Performance: A Structural Equation Analysis, National Council on Measurement in Education, Montreal, Quebec, Canada, 2005.
Comparing Eighth Grade Diagnostic Test Results for Korean and American Students, National Council on Measurement in Education, Chicago, Illinois, 2003.