How can I minimize the distance from a given input distribution?


Problem description

I have a list of customers, and each of them can be "activated" in one of several ways:

import pandas as pd
import numpy as np

n = 1000
df = pd.DataFrame(list(range(n)), columns=['Customer_ID'])
df['A'] = np.random.randint(2, size=n)  # 1 = customer can be activated via "A"
df['B'] = np.random.randint(2, size=n)
df['C'] = np.random.randint(2, size=n)

Each customer can be activated on "A", "B", or "C", and only if the Boolean for that activation type is equal to 1.

As input I have the target counts of final activations, e.g.:

Target_A = 500
Target_B = 250
Target_C = 250
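
As a quick feasibility sanity check, one can compare how many customers are eligible for each activation type with the requested targets; this is just a minimal sketch reusing the `df` and `Target_*` values defined above:

# Sanity check: eligible customers per type vs. requested activations.
eligible = df[['A', 'B', 'C']].sum()
for t, target in [('A', Target_A), ('B', Target_B), ('C', Target_C)]:
    print(f"{t}: eligible={eligible[t]}, target={target}")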

The random values in the code are an input for the optimizer and indicate whether or not the client can be activated in that way. How can I assign each client to at most one activation type so that the final targets are respected? How can I minimize the distance between the counts of actual activations and the input targets?

Solution

Do you have any tested examples? I think this might work, but I'm not sure:

import pandas as pd
import numpy as np
from pulp import LpProblem, LpVariable, LpMinimize, LpStatus, lpSum, value

prob = LpProblem("problem", LpMinimize)

# same random input data as in the question
n = 1000
df = pd.DataFrame(list(range(n)), columns=['Customer_ID'])
df['A'] = np.random.randint(2, size=n)
df['B'] = np.random.randint(2, size=n)
df['C'] = np.random.randint(2, size=n)

Target_A = 500
Target_B = 250
Target_C = 250

# one binary decision variable per customer and activation type
A = LpVariable.dicts("A", range(n), cat='Binary')
B = LpVariable.dicts("B", range(n), cat='Binary')
C = LpVariable.dicts("C", range(n), cat='Binary')

# auxiliary variables that will hold |actual activations - target|
O1 = LpVariable("O1", cat='Integer')
O2 = LpVariable("O2", cat='Integer')
O3 = LpVariable("O3", cat='Integer')

# objective: minimize the total absolute deviation from the targets
prob += O1 + O2 + O3

# linearization of the absolute values: O >= x and O >= -x give O >= |x|,
# and the minimization pushes each O down to exactly |x|
prob += O1 >= Target_A - lpSum(A.values())
prob += O1 >= lpSum(A.values()) - Target_A
prob += O2 >= Target_B - lpSum(B.values())
prob += O2 >= lpSum(B.values()) - Target_B
prob += O3 >= Target_C - lpSum(C.values())
prob += O3 >= lpSum(C.values()) - Target_C

for idx in range(n):
    prob += A[idx] + B[idx] + C[idx] <= 1   # at most one activation per customer
    prob += A[idx] <= df['A'][idx]          # cannot activate a type the customer is not eligible for
    prob += B[idx] <= df['B'][idx]
    prob += C[idx] <= df['C'][idx]

prob.solve()

print("status:", LpStatus[prob.status])
print("difference:", value(prob.objective))
