Convert decimal to binary in Matlab?


Question

I am converting base-10 numbers to base-2 numbers, and specifying the number of bits I'd like to use to represent these base-10 numbers.

Here's my code for negative numbers:

function output = DTB(decimal,bits)
if decimal < 0
    smallestNum = -(2^(bits-1));

    if decimal < smallestNum
        error('%d cannot be represented in %d bits. Increase the number of bits. ',decimal,bits);
        output = '';
    end

    output = '1';
    bits = bits - 1;

    if smallestNum == decimal
        while bits ~= 0
            output = [output,'0'];
            bits = bits - 1;
        end
    end

    num = smallestNum;
    while num ~= decimal
        num = smallestNum + 2^(bits-1);
        if num > decimal
            output = [output,'0'];
        else
            output = [output,'1'];
            smallestNum = smallestNum + 2^(bits-1);
        end
        bits = bits - 1;
    end

    while bits ~= 0
        output = [output,'0'];
        bits = bits - 1;
    end
end
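
For instance, assuming the signature DTB(decimal, bits), a quick check of the negative path with a 6-bit width:

    DTB(-8, 6)    % returns '111000', the 6-bit two's-complement form of -8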

This works fine. The issue I'm running into (oddly enough, since going from positive decimals to binary should be easier) is with positive integers. It should just be a minor tweak to the negative-number algorithm, right? The positive-number piece does not work in the case of decimal = 8 and bits = 6, for example (it fails for various powers of 2; a trace of that case follows the code below). What's wrong here?

Here's the code for positive integers:

if decimal > 0
    largestNum = (2^(bits-1))-1;

    if decimal > largestNum
        error('%d cannot be represented in %d bits. Increase the number of bits. ',decimal,bits);
        output = '';
    end

    % first spot must be zero to show it's a positive number
    output = '0';
    bits = bits - 1;

    largestNum = largestNum + 1;
    num = largestNum;

    while num ~= decimal
        num = largestNum - 2^(bits-1);
        if num > decimal
            output = [output,'0'];
        end
        if num <= decimal
            output = [output,'1'];
            largestNum = largestNum - 2^(bits-1);
        end
        bits = bits - 1;
    end

    while bits ~= 0
        output = [output,'0'];
        bits = bits - 1;
    end
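
As a rough sketch of where this goes wrong, here is a hand trace of the loop above for the failing example decimal = 8, bits = 6:

    % after the leading '0': bits = 5, largestNum = 32, num = 32
    % iter 1: num = 32 - 16 = 16 > 8  ->  '0'   (largestNum is never reduced)
    % iter 2: num = 32 -  8 = 24 > 8  ->  '0'
    % iter 3: num = 32 -  4 = 28 > 8  ->  '0'
    % iter 4: num = 32 -  2 = 30 > 8  ->  '0'
    % iter 5: num = 32 -  1 = 31 > 8  ->  '0'
    % bits has now reached 0, but num is still not 8, so the loop keeps
    % subtracting fractional powers of two and never terminates.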

Solution

You need to reduce largestNum when you put a zero in the output array, because you're essentially starting from a binary string of all ones (i.e. largestNum). This code worked for me:

if decimal > 0
    largestNum = (2^(bits-1))-1;

    if decimal > largestNum
        error('%d cannot be represented in %d bits. Increase the number of bits. ',decimal,bits);
        output = '';
    end

    % first spot must be zero to show it's a positive number
    output = '0';
    bits = bits - 1;

    largestNum = largestNum + 1;
    num = largestNum;

    while num ~= decimal
        num = largestNum - 2^(bits-1);
        if num > decimal
            output = [output,'0'];
            largestNum = largestNum - 2^(bits-1);
        end
        if num <= decimal
            output = [output,'1'];
        end
        bits = bits - 1;
    end

    while bits ~= 0
        output = [output,'0'];
        bits = bits - 1;
    end
end
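
With that change dropped back into the DTB function from the question (assuming the rest of the function is unchanged), the failing case resolves as expected:

    DTB(8, 6)    % returns '001000'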

I'm not sure what this is for, but I would highly recommend using the built-in dec2bin to do this.
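
For reference, a minimal sketch of that route; the 2^bits offset for negative inputs is an added trick, not part of the original answer:

    dec2bin(8, 6)                      % '001000', zero-padded to 6 digits

    % For a negative value, one common trick is to add 2^bits before calling
    % dec2bin, which yields the two's-complement bit pattern (this assumes the
    % value actually fits in the requested number of bits):
    bits = 6;
    decimal = -8;
    dec2bin(2^bits + decimal, bits)    % '111000'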
