The availability of high-frequency data, in line with IT developments, enables the use of more information to estimate not only the variance (volatility), but also higher realized moments and the entire realized distribution of returns. Conventional approaches use only closing prices and assume that the underlying distribution is time-invariant, which makes traditional forecasting models unreliable. Moreover, time-varying realized moments support findings that returns are not identically distributed across trading days. The objective of the paper is to find an appropriate data-driven distribution of returns using high-frequency data. The kernel estimation method is applied to DAX intraday prices; it balances the bias and the variance of the realized moments with respect to both the bandwidth selection and the sampling frequency selection. The main finding is that, when a two-scale estimator is applied, the kernel bandwidth is strongly related to the sampling frequency at the slow time scale, while the sampling frequency at the fast time scale is held fixed. Realized kernel density estimation enriches the literature by providing the best data-driven proxy of the true but unknown probability density function of returns, which can be used as a benchmark for comparison against ex-ante or implied moments.
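As a rough illustration of the estimation step, the sketch below applies Gaussian kernel density estimation to intraday log returns. It uses SciPy's `gaussian_kde` with Silverman's rule-of-thumb bandwidth and simulated prices; the paper's joint selection of bandwidth and sampling frequency via a two-scale estimator is not replicated here, so all inputs are assumptions.

```python
import numpy as np
from scipy import stats

def realized_kernel_density(prices, bandwidth=None):
    """Kernel density estimate of intraday log returns.

    bandwidth=None falls back to Silverman's rule of thumb; the paper
    selects the bandwidth jointly with the sampling frequency, which
    this sketch does not attempt.
    """
    returns = np.diff(np.log(prices))
    kde = stats.gaussian_kde(returns, bw_method=bandwidth or "silverman")
    return returns, kde

# Illustrative usage on simulated prices (not real DAX data)
rng = np.random.default_rng(42)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0, 0.001, size=1_000)))
returns, kde = realized_kernel_density(prices)

rv = np.sum(returns ** 2)   # realized variance (second realized moment)
grid = np.linspace(returns.min(), returns.max(), 200)
density = kde(grid)         # data-driven proxy of the return density
print(f"realized variance: {rv:.6f}, density peak at {grid[density.argmax()]:.5f}")
```

Coarsening the sampling frequency (e.g. keeping every fifth observation) changes both the return series and the rule-of-thumb bandwidth, which is the bias-variance interplay the abstract refers to.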
In the most developed countries, the first estimates of Gross Domestic Product (GDP) are available 30 days after the end of the reference quarter. This paper explores the possibilities of building an econometric model for short-term forecasting of GDP in Bosnia and Herzegovina (B&H). The database consists of more than 100 daily, monthly, and quarterly time series for the period 2006q1-2016q4. The aim of the study was to estimate and validate different factor models. Due to the limited length of the series, the factor analysis included 12 time series whose correlation coefficient with quarterly GDP exceeded 0.8 in absolute value. Principal component analysis (PCA) and the orthogonal varimax rotation of the initial solution were applied. Three principal components were extracted from the set of series, together accounting for 73.34% of its total variability. The final model for forecasting quarterly B&H GDP was selected based on a comparative analysis of the predictive efficiency of the analysed models over the in-sample and out-of-sample periods. The unbiasedness and efficiency of individual forecasts were tested using the Mincer-Zarnowitz regression, while the accuracy of the two models' forecasts was compared using the Diebold-Mariano test. We also examined whether combining the two forecasts is justified, using the Granger-Ramanathan regression. A factor model with three factors proved to be the most efficient factor model for forecasting quarterly B&H GDP.
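To make the forecast-evaluation step concrete, here is a minimal sketch of the Mincer-Zarnowitz regression and a simple large-sample Diebold-Mariano statistic (without the HAC variance or small-sample corrections typically used), run on hypothetical series; the data and names are illustrative, not the paper's.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical quarterly GDP growth and two competing forecasts; in the
# paper these would be B&H GDP figures and factor-model forecasts.
rng = np.random.default_rng(0)
actual = rng.normal(1.0, 0.5, size=40)
f1 = actual + rng.normal(0.0, 0.2, size=40)
f2 = actual + rng.normal(0.0, 0.3, size=40)

# Mincer-Zarnowitz regression: actual_t = a + b * f1_t + e_t.
# Unbiased, efficient forecasts imply the joint hypothesis a = 0, b = 1.
mz = sm.OLS(actual, sm.add_constant(f1)).fit()
print(mz.params)                       # estimated (a, b)
print(mz.f_test("const = 0, x1 = 1"))  # joint unbiasedness test

# Diebold-Mariano statistic on the squared-error loss differential;
# approximately N(0, 1) under the null of equal predictive accuracy.
d = (actual - f1) ** 2 - (actual - f2) ** 2
dm = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
print(f"DM statistic: {dm:.2f}")
```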
An accurate weather forecast is the basis for the valuation of weather derivatives, securities that partially compensate holders for financial losses in case of, from their perspective, adverse outside temperatures. The paper analyses the precision of two forecasting models of average daily temperature, the Ornstein-Uhlenbeck process (O-U process) and the generalized autoregressive conditional heteroskedasticity (GARCH) model, and presumes the GARCH model to be the more accurate one. Temperature data for the period 2000-2017 were taken from the DHMZ database for the Maksimir station and used as the basis for the 2018 forecast. Forecasted values were compared to the available actual data for 2018 using the MAPE and RMSE measures. By both measures, the GARCH model provides more accurate forecasts than the O-U process: its RMSE stands at 3.75 °C versus 4.53 °C for the O-U process, and its MAPE at 140.66 % versus 144.55 %. Artificial intelligence and supercomputers could be used to further improve forecasting accuracy, allowing additional data, such as up-to-date temperatures, and more complex calculations to be included in the forecasting process.
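The sketch below illustrates only the O-U side of the comparison: it fits the discretized Ornstein-Uhlenbeck model, which reduces to an AR(1), to simulated temperatures and produces a mean-reverting forecast. The paper's seasonal adjustment and the GARCH model are omitted, and all parameters and data are assumptions, not the DHMZ Maksimir series.

```python
import numpy as np

# Simulate a stationary temperature path (stand-in for deseasonalized data)
rng = np.random.default_rng(1)
n, kappa_true, theta_true, sigma = 3 * 365, 0.3, 12.0, 2.0
temp = np.empty(n)
temp[0] = theta_true
for t in range(1, n):
    temp[t] = temp[t - 1] + kappa_true * (theta_true - temp[t - 1]) + sigma * rng.normal()

# Discretized O-U: T_t - T_{t-1} = kappa * (theta - T_{t-1}) + eps_t,
# estimated by least squares as an AR(1) in levels.
dT = np.diff(temp)
X = np.column_stack([np.ones(n - 1), temp[:-1]])
(b0, b1), *_ = np.linalg.lstsq(X, dT, rcond=None)
kappa_hat, theta_hat = -b1, b0 / -b1
print(f"kappa = {kappa_hat:.2f}, theta = {theta_hat:.1f} °C")

# h-step-ahead mean forecast reverts geometrically toward theta_hat
h = 30
print(f"{h}-day forecast: {theta_hat + (temp[-1] - theta_hat) * (1 - kappa_hat) ** h:.1f} °C")

# Accuracy measures used in the paper
def rmse(actual, forecast):
    return np.sqrt(np.mean((actual - forecast) ** 2))

def mape(actual, forecast):
    return np.mean(np.abs((actual - forecast) / actual)) * 100
```

Since MAPE divides by the actual value, daily temperatures near 0 °C inflate the percentage dramatically, which plausibly explains the triple-digit MAPE figures reported above.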
In today’s globalised world, we are faced with vast amounts of data that form the basis for many crucial decisions. The ability to analyse and interpret data correctly is, therefore, of essential importance in practically every field. Financial information is one such field, especially the segment of information used by companies in corporate communication. This article focuses solely on information related to Internet financial reporting, which is particularly important for external stakeholders. The research is based on the Internet Financial Reporting (IFR) Index for 27 Slovenian companies listed on the Ljubljana Stock Exchange. The article demonstrates the methodological approach to the creation of the IFR Index, which served as a new variable in the subsequent steps of the research, in which various statistical analyses (univariate, bivariate, and multivariate) were performed. Based on descriptive statistics, the main characteristics of the IFR Index are identified; using the t-test for two independent samples, a difference was found between the companies in the two listings. Moreover, the correlation analysis verified the correlation between the IFRC and IFR-P variables, and, finally, the multiple regression analysis revealed that company size is the factor influencing the level of the IFR Index. The purpose of this article is to raise awareness of alternative research methods and to facilitate the selection of the most appropriate method for addressing particular research questions in Internet financial reporting.
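As an illustration of the statistical toolkit the article applies, the sketch below runs a two-independent-samples t-test and a multiple regression on simulated stand-ins for the 27 companies; the variables, their names, and their relationships are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

# Hypothetical data for 27 listed companies: an IFR index score, a
# listing indicator (1 = prime listing), and firm size (log total assets)
rng = np.random.default_rng(7)
size = rng.normal(12.0, 1.5, size=27)
prime = rng.integers(0, 2, size=27)
ifr = 0.4 * size + 2.0 * prime + rng.normal(0.0, 1.0, size=27)

# t-test for two independent samples: does the IFR index differ by listing?
t_stat, p_val = stats.ttest_ind(ifr[prime == 1], ifr[prime == 0])
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

# Multiple regression: which factors influence the level of the IFR index?
ols = sm.OLS(ifr, sm.add_constant(np.column_stack([size, prime]))).fit()
print(ols.summary())
```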
This paper analyses the inefficiency of social services targeting in the Federation of Bosnia and Herzegovina (FB&H). Using official-statistics microdata from the Household Budget Survey 2015, three models of the social minimum in FB&H were constructed: extreme poverty, general poverty, and a model with multidimensional poverty aspects. The analysis of the features of poor household categories showed that the most vulnerable residents of FB&H are not beneficiaries of permanent financial assistance. The reason for such inefficient targeting was recognized in the Federal Law on Principles of Social Care, Care for the War-Disabled Civilians and Care for Families with Children, which stipulates that only persons and families that cumulatively meet three conditions qualify: they are incapable of work, they have insufficient income, and they have no family members legally obligated to support them. The results indicated high inconsistency among cantons, both in the legal criteria for qualification and in the amounts of permanent social assistance. The Proxy Means Test (PMT) model is offered as one possible solution for improving social services targeting in FB&H. Given the importance of targeting efficiency in social services, the research results could be useful both for vulnerable segments of society and for the federal and cantonal ministries of labour and social affairs in the process of targeting households that qualify for social support programmes.
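For readers unfamiliar with the technique, here is a minimal Proxy Means Test sketch on simulated households: consumption is regressed on observable proxies, and the predicted score is compared with an eligibility threshold. Every variable, coefficient, and threshold is an assumption for illustration, not output from the FB&H survey data.

```python
import numpy as np
import statsmodels.api as sm

# Simulated household proxies (stand-ins for Household Budget Survey
# items such as dwelling characteristics, assets, household composition)
rng = np.random.default_rng(3)
n = 500
rooms = rng.integers(1, 6, size=n)
has_car = rng.integers(0, 2, size=n)
hh_size = rng.integers(1, 8, size=n)
log_cons = 6 + 0.15 * rooms + 0.4 * has_car - 0.1 * hh_size + rng.normal(0.0, 0.3, n)

# Step 1: fit log consumption on the proxies (the PMT scoring formula)
X = sm.add_constant(np.column_stack([rooms, has_car, hh_size]))
pmt = sm.OLS(log_cons, X).fit()

# Step 2: predicted consumption is the PMT score; households whose
# score falls below the threshold are targeted for assistance
score = pmt.predict(X)
threshold = np.quantile(score, 0.2)   # illustrative eligibility cut-off
print(f"targeted households: {(score < threshold).sum()} of {n}")
```

Because eligibility depends on predicted rather than reported income, such a score is harder to misstate, which is why PMT models are often proposed where income verification is weak.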