C++ - Decimal to binary converting
Problem description
@edit: LOL I looked at it and facepalmed. Yesterday I made dec to bin with:
std::string toBinary(int n)
{
    std::string r;
    while (n != 0) { r = (n % 2 == 0 ? "0" : "1") + r; n /= 2; }
    return r;
}
I wrote a 'simple' (it took me 30 minutes) program that converts a decimal number to binary. I am SURE that there's a much simpler way, so can you show me? Here's the code:
#include <iostream>
#include <stdlib.h>
using namespace std;
int a1, a2, remainder;
int tab = 0;
int maxtab = 0;
int table[32]; //enough elements for the bits of an int
int main()
{
system("clear");
cout << "Enter a decimal number: ";
cin >> a1;
a2 = a1; //we need our number for later on so we save it in another variable
while (a1!=0) //dividing by two until we hit 0
{
remainder = a1%2; //getting a remainder - decimal number(1 or 0)
a1 = a1/2; //dividing our number by two
maxtab++; //+1 to max elements of the table
}
maxtab--; //-1 to max elements of the table (when dividing finishes it has counted 1 extra element that we don't want, and it's equal to 0)
a1 = a2; //we must do the calculations one more time so we're getting back our original number
//(the first loop only counted the digits; the table itself is filled in the second pass below)
while (a1!=0) //same calculations 2nd time but adding every 1 or 0 (remainder) to separate element in table
{
remainder = a1%2; //getting a remainder
a1 = a1/2; //dividing by 2
table[tab] = remainder; //adding 0 or 1 to an element
tab++; //tab (element count) increases by 1 so next remainder is saved in another element
}
tab--; //same as with maxtab--
cout << "Your binary number: ";
while (tab>=0) //until we get to the 0 (1st) element of the table
{
cout << table[tab] << " "; //write the value of an element (0 or 1)
tab--; //decreasing by 1 so we show 0's and 1's FROM THE BACK (correct way)
}
cout << endl;
return 0;
}
By the way, it's complicated, but I tried my best.
Solution
#include <iostream>
#include <bitset>
int main()
{
std::string binary = std::bitset<8>(128).to_string(); //to binary
std::cout << binary << "\n";
unsigned long decimal = std::bitset<8>(binary).to_ulong();
std::cout << decimal << "\n";
return 0;
}