Training personalized machine learning models


Problem description

I work on a PHP project along with Python, which uses Flask as an API that predicts whether a user will like a post based on their previous engagement on other posts; it is purely user-based.

My requirement: suppose there are thousands of users in our system, and they have liked old posts before. When new posts arrive, I need to somehow identify whether each user will like them or not, and this is done via a cron job.

Approach 1

I am using logistic regression as the model, so I probably need a dynamic pkl file for each user, because different users' engagement on the same post is different. So I need to save something like a model_{user_id}.pkl file, where user_id is the ID of the user.
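
For illustration, a minimal sketch of what this per-user pkl approach could look like with scikit-learn and joblib; the names X_user, y_user, X_new_post and the models/ directory are assumptions for the example, not part of the original question:

# Hypothetical sketch of approach 1: one pickled model per user.
# X_user / y_user (that user's past posts and like labels), X_new_post and
# the models/ directory are assumed names, not from the original question.
import joblib
from sklearn.linear_model import LogisticRegression

def train_user_model(user_id, X_user, y_user):
    model = LogisticRegression().fit(X_user, y_user)
    joblib.dump(model, f"models/model_{user_id}.pkl")  # one file per user

def predict_like(user_id, X_new_post):
    model = joblib.load(f"models/model_{user_id}.pkl")  # reloaded on every cron run
    return model.predict(X_new_post)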

Approach 2

Use a content-based recommender system. But as far as I know, it can't be stored like a pkl file in production, so for each of the thousands of users I need to run the recommender function.
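
For context, a rough sketch of what "running the recommender function" per user could mean here, using TF-IDF text features and cosine similarity; liked_post_texts and new_post_texts are assumed inputs, not from the question:

# Hypothetical sketch of approach 2: content-based scoring that has to be
# recomputed per user (liked_post_texts / new_post_texts are assumed inputs).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def score_new_posts(liked_post_texts, new_post_texts):
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform(liked_post_texts + new_post_texts)
    liked_vecs = vectors[:len(liked_post_texts)]
    new_vecs = vectors[len(liked_post_texts):]
    # Average similarity of each new post to the posts this user already liked
    return cosine_similarity(new_vecs, liked_vecs).mean(axis=1)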

Drawbacks of approach 1

Creating a dynamic pkl file for each user means many more files; I have never seen this approach on the internet.

Drawbacks of approach 2

Calling the recommender function for each user is probably a bad idea, I believe; that will heavily affect CPU usage, etc.

Can somebody please help me solve this problem properly? I am new to machine learning. Please consider my question. Thanks in advance.

Answer

I would suggest the following:

  • Create the user models as an array (or DataFrame) of models
  • Save this array as a pkl
  • When the application loads (not on every API call), load the array of models into memory
  • When the API is called, the model is already in memory - use it to predict the result

Something like this (not tested - just a notion):

# For saving the models
import pandas as pd
import jsonpickle
from sklearn.ensemble import RandomForestClassifier

model_data = pd.DataFrame(columns=['user', 'model'])
temp_model = RandomForestClassifier().fit(X, y)
new = pd.DataFrame({'user': [user_id], 'model': [temp_model]})
model_data = pd.concat([model_data, new], ignore_index=True)  # DataFrame.append is deprecated; concat does the same
packed_model = jsonpickle.encode(model_data)  # serialize the whole DataFrame of models

# For loading the models
unpacked_model = jsonpickle.decode(packed_model).set_index('user')  # do this once, at the beginning of your Flask file, so the models sit in memory
user_model = unpacked_model.at[user_id, 'model']  # this goes inside every API call
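
To make the "load once at startup, reuse per request" idea concrete, here is a hedged sketch of how the loaded models might be wired into a Flask endpoint; the models.json file name, the /predict route, and the request payload shape are assumptions for illustration, not part of the original answer:

# Hypothetical Flask wiring: decode the packed models once at import time,
# then reuse the in-memory DataFrame inside every request.
import jsonpickle
from flask import Flask, jsonify, request

app = Flask(__name__)

# Loaded once when the app starts, not on every API call (models.json is an assumed file name)
with open("models.json") as f:
    model_data = jsonpickle.decode(f.read()).set_index('user')

@app.route("/predict/<int:user_id>", methods=["POST"])
def predict(user_id):
    user_model = model_data.at[user_id, 'model']   # model is already in memory
    features = request.get_json()["features"]      # assumed payload: {"features": [[...]]}
    prediction = user_model.predict(features)
    return jsonify({"like": int(prediction[0])})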

