Plot PCA loadings and loading labels in a biplot in sklearn (like R's autoplot)


Question

I saw this tutorial in R w/ autoplot. They plotted the loadings and loading labels:

autoplot(prcomp(df), data = iris, colour = 'Species',
         loadings = TRUE, loadings.colour = 'blue',
         loadings.label = TRUE, loadings.label.size = 3)

https://cran.r-project.org/web/packages/ggfortify/vignettes/plot_pca.html

I prefer Python 3 w/ matplotlib, scikit-learn, and pandas for my data analysis. However, I don't know how to add these loading vectors and labels to my plot.

How can you plot these vectors w/ matplotlib?

I've been reading Recovering features names of explained_variance_ratio_ in PCA with sklearn but haven't figured it out yet.

Here's how I plot it in Python:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn import decomposition
import seaborn as sns; sns.set_style("whitegrid", {'axes.grid' : False})

%matplotlib inline
np.random.seed(0)

# Iris dataset
DF_data = pd.DataFrame(load_iris().data, 
                       index = ["iris_%d" % i for i in range(load_iris().data.shape[0])],
                       columns = load_iris().feature_names)

Se_targets = pd.Series(load_iris().target, 
                       index = ["iris_%d" % i for i in range(load_iris().data.shape[0])], 
                       name = "Species")

# Scaling mean = 0, var = 1
DF_standard = pd.DataFrame(StandardScaler().fit_transform(DF_data), 
                           index = DF_data.index,
                           columns = DF_data.columns)

# Sklearn for Principal Component Analysis
# Dims
m = DF_standard.shape[1]
K = 2

# PCA (How I tend to set it up)
Mod_PCA = decomposition.PCA(n_components=m)
DF_PCA = pd.DataFrame(Mod_PCA.fit_transform(DF_standard), 
                      columns=["PC%d" % k for k in range(1,m + 1)]).iloc[:,:K]
# Color classes
color_list = [{0:"r",1:"g",2:"b"}[x] for x in Se_targets]

fig, ax = plt.subplots()
ax.scatter(x=DF_PCA["PC1"], y=DF_PCA["PC2"], color=color_list)
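
As far as I can tell, the vectors autoplot draws as arrows should come from Mod_PCA.components_ (sklearn's convention: one row per component, one column per original feature), so at least the loadings can be tabulated against the feature names. A minimal sketch using the objects above:

# Loadings as a table: rows = the K kept PCs, columns = the original features.
# Assumes sklearn's convention that components_ has shape (n_components, n_features).
DF_loadings = pd.DataFrame(Mod_PCA.components_[:K],
                           index=["PC%d" % k for k in range(1, K + 1)],
                           columns=DF_standard.columns)
print(DF_loadings)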

Solution

Try the ‘pca’ library. This will plot the explained variance, and create a biplot.

pip install pca

from pca import pca

# Initialize to reduce the data up to the number of components that explain 95% of the variance.
model = pca(n_components=0.95)

# Or reduce the data towards 2 PCs
model = pca(n_components=2)

# Fit transform
results = model.fit_transform(X)

# Plot explained variance
fig, ax = model.plot()

# Scatter first 2 PCs
fig, ax = model.scatter()

# Make biplot with the number of features
fig, ax = model.biplot(n_feat=4)
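
Here X is the data to decompose; in the questioner's setup that would be the standardized data (DF_standard, or its .values).

If you would rather stay with plain matplotlib and the sklearn objects from the question, one option is to draw the rows of components_ as arrows on top of the PC1/PC2 scatter. This is a minimal sketch, not the pca package's implementation, and the scale factor is an arbitrary choice made only so the arrows are visible next to the scores:

# Matplotlib-only loadings overlay, reusing Mod_PCA, DF_PCA, DF_standard and
# color_list from the question. The scale factor is arbitrary and purely visual.
loadings = Mod_PCA.components_[:2].T   # shape: (n_features, 2)
scale = 3

fig, ax = plt.subplots()
ax.scatter(DF_PCA["PC1"], DF_PCA["PC2"], color=color_list, alpha=0.5)

for (x, y), name in zip(loadings * scale, DF_standard.columns):
    ax.arrow(0, 0, x, y, color="blue", head_width=0.05)
    ax.text(x * 1.15, y * 1.15, name, color="blue", ha="center", va="center")

ax.set_xlabel("PC1")
ax.set_ylabel("PC2")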
