Performance Analysis of BigDecimal Arithmetic Operation in Java

Jos Timanta Tarigan, Elviawaty M. Zamzami, Cindy Laurent Ginting

Abstract


The Java programming language provides binary floating-point primitive data types such as float and double to represent decimal numbers. However, these data types cannot represent decimal numbers with complete accuracy, which may cause precision errors during calculations. To achieve better precision, Java provides the BigDecimal class. Unlike float and double, which use approximation, this class can represent the exact value of a decimal number. However, it comes with a drawback: BigDecimal is treated as an object and requires additional CPU and memory to operate on. In this paper, statistical data are presented on the performance impact of using BigDecimal compared to the double data type. Common mathematical processes were used as test cases, such as calculating a mean value, sorting, and multiplying matrices.
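The precision difference the abstract describes can be illustrated with a minimal sketch (the class name PrecisionDemo is illustrative): a double cannot represent 0.1 or 0.2 exactly in binary, so their sum carries a rounding error, while BigDecimal values constructed from strings are exact decimals.

```java
import java.math.BigDecimal;

public class PrecisionDemo {
    public static void main(String[] args) {
        // Binary floating point: 0.1 and 0.2 have no exact binary
        // representation, so the sum is not exactly 0.3.
        double d = 0.1 + 0.2;
        System.out.println(d); // prints 0.30000000000000004

        // BigDecimal built from String literals stores the exact
        // decimal value, so the sum is exactly 0.3.
        BigDecimal b = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(b); // prints 0.3
    }
}
```

Note that `new BigDecimal(0.1)` (from a double) would inherit the binary rounding error; the String constructor is what preserves exactness.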

Keywords


BigDecimal arithmetic operation; floating-point arithmetic; numerical programming; optimization; programming language






DOI: http://dx.doi.org/10.5614%2Fitbj.ict.res.appl.2018.12.3.5


